* [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
@ 2016-11-09 17:13 Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
` (6 more replies)
0 siblings, 7 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-09 17:13 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Karl Rister, Fam Zheng, Stefan Hajnoczi
Recent performance investigation work done by Karl Rister shows that the
guest->host notification takes around 20 us. This is more than the "overhead"
of QEMU itself (e.g. block layer).
One way to avoid the costly exit is to use polling instead of notification.
The main drawback of polling is that it consumes CPU resources. To benefit
performance, the host must have extra CPU cycles available on physical CPUs
that aren't used by the guest.
This is an experimental AioContext polling implementation. It adds a polling
callback into the event loop. Polling functions are implemented for virtio-blk
virtqueue guest->host kick and Linux AIO completion.
The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
poll before entering the usual blocking poll(2) syscall. Try setting this
variable to the time from old request completion to new virtqueue kick.
By default no polling is done. QEMU_AIO_POLL_MAX_NS must be set to enable any
polling!
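As a usage sketch, the variable is exported before launching QEMU (the commented-out guest command line below is purely illustrative — drive options and image name are assumptions, only the environment variable comes from this series):

```shell
# QEMU_AIO_POLL_MAX_NS is read once at AioContext setup.
# 2000 ns = poll for up to 2 microseconds before falling back to poll(2).
QEMU_AIO_POLL_MAX_NS=2000
export QEMU_AIO_POLL_MAX_NS

# Illustrative invocation (options are an example, not from this series):
# qemu-system-x86_64 -enable-kvm \
#     -drive file=test.img,if=virtio,aio=native,cache=none

echo "$QEMU_AIO_POLL_MAX_NS"
```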
Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
values. If you don't find a good value we should double-check the tracing data
to see if this experimental code can be improved.
Stefan Hajnoczi (3):
aio-posix: add aio_set_poll_handler()
virtio: poll virtqueues for new buffers
linux-aio: poll ring for completions
aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
block/linux-aio.c | 17 +++++++
hw/virtio/virtio.c | 19 ++++++++
include/block/aio.h | 16 +++++++
4 files changed, 185 insertions(+)
--
2.7.4
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler()
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
@ 2016-11-09 17:13 ` Stefan Hajnoczi
2016-11-09 17:30 ` Paolo Bonzini
2016-11-15 20:14 ` Fam Zheng
2016-11-09 17:13 ` [Qemu-devel] [RFC 2/3] virtio: poll virtqueues for new buffers Stefan Hajnoczi
` (5 subsequent siblings)
6 siblings, 2 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-09 17:13 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Karl Rister, Fam Zheng, Stefan Hajnoczi
Poll handlers are executed for a certain amount of time before the event
loop polls file descriptors. This keeps the event loop thread scheduled, so
it can recognize events faster than blocking poll(2) calls.
This is an experimental feature to reduce I/O latency in high IOPS
scenarios.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
include/block/aio.h | 16 +++++++
2 files changed, 149 insertions(+)
diff --git a/aio-posix.c b/aio-posix.c
index e13b9ab..933a972 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -18,6 +18,7 @@
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
+#include "qemu/cutils.h"
#ifdef CONFIG_EPOLL_CREATE1
#include <sys/epoll.h>
#endif
@@ -33,6 +34,19 @@ struct AioHandler
QLIST_ENTRY(AioHandler) node;
};
+struct AioPollHandler {
+ QLIST_ENTRY(AioPollHandler) node;
+
+ AioPollFn *poll_fn; /* check whether to invoke io_fn() */
+ IOHandler *io_fn; /* handler callback */
+ void *opaque; /* user-defined argument to callbacks */
+
+ bool deleted;
+};
+
+/* How long to poll AioPollHandlers before monitoring file descriptors */
+static int64_t aio_poll_max_ns;
+
#ifdef CONFIG_EPOLL_CREATE1
/* The fd number threashold to switch to epoll */
@@ -264,8 +278,61 @@ void aio_set_event_notifier(AioContext *ctx,
is_external, (IOHandler *)io_read, NULL, notifier);
}
+static AioPollHandler *find_aio_poll_handler(AioContext *ctx,
+ AioPollFn *poll_fn,
+ void *opaque)
+{
+ AioPollHandler *node;
+
+ QLIST_FOREACH(node, &ctx->aio_poll_handlers, node) {
+ if (node->poll_fn == poll_fn &&
+ node->opaque == opaque) {
+ if (!node->deleted) {
+ return node;
+ }
+ }
+ }
+
+ return NULL;
+}
+
+void aio_set_poll_handler(AioContext *ctx,
+ AioPollFn *poll_fn,
+ IOHandler *io_fn,
+ void *opaque)
+{
+ AioPollHandler *node;
+
+ node = find_aio_poll_handler(ctx, poll_fn, opaque);
+ if (!io_fn) { /* remove */
+ if (!node) {
+ return;
+ }
+
+ if (ctx->walking_poll_handlers) {
+ node->deleted = true;
+ } else {
+ QLIST_REMOVE(node, node);
+ g_free(node);
+ }
+ } else { /* add or update */
+ if (!node) {
+ node = g_new(AioPollHandler, 1);
+ QLIST_INSERT_HEAD(&ctx->aio_poll_handlers, node, node);
+ }
+
+ node->poll_fn = poll_fn;
+ node->io_fn = io_fn;
+ node->opaque = opaque;
+ }
+
+ aio_notify(ctx);
+}
+
+
bool aio_prepare(AioContext *ctx)
{
+ /* TODO run poll handlers? */
return false;
}
@@ -400,6 +467,47 @@ static void add_pollfd(AioHandler *node)
npfd++;
}
+static bool run_poll_handlers(AioContext *ctx)
+{
+ int64_t start_time;
+ unsigned int loop_count = 0;
+ bool fired = false;
+
+ /* Is there any polling to be done? */
+ if (!QLIST_FIRST(&ctx->aio_poll_handlers)) {
+ return false;
+ }
+
+ start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ while (!fired) {
+ AioPollHandler *node;
+ AioPollHandler *tmp;
+
+ QLIST_FOREACH_SAFE(node, &ctx->aio_poll_handlers, node, tmp) {
+ ctx->walking_poll_handlers++;
+ if (!node->deleted && node->poll_fn(node->opaque)) {
+ node->io_fn(node->opaque);
+ fired = true;
+ }
+ ctx->walking_poll_handlers--;
+
+ if (!ctx->walking_poll_handlers && node->deleted) {
+ QLIST_REMOVE(node, node);
+ g_free(node);
+ }
+ }
+
+ loop_count++;
+ if ((loop_count % 1024) == 0 &&
+ qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time >
+ aio_poll_max_ns) {
+ break;
+ }
+ }
+
+ return fired;
+}
+
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
@@ -410,6 +518,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
aio_context_acquire(ctx);
progress = false;
+ if (aio_poll_max_ns &&
+ /* see qemu_soonest_timeout() uint64_t hack */
+ (uint64_t)aio_compute_timeout(ctx) > (uint64_t)aio_poll_max_ns) {
+ if (run_poll_handlers(ctx)) {
+ progress = true;
+ blocking = false; /* poll again, don't block */
+ }
+ }
+
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
@@ -484,6 +601,22 @@ bool aio_poll(AioContext *ctx, bool blocking)
void aio_context_setup(AioContext *ctx)
{
+ if (!aio_poll_max_ns) {
+ int64_t val;
+ const char *env_str = getenv("QEMU_AIO_POLL_MAX_NS");
+
+ if (!env_str) {
+ env_str = "0";
+ }
+
+ if (!qemu_strtoll(env_str, NULL, 10, &val)) {
+ aio_poll_max_ns = val;
+ } else {
+ fprintf(stderr, "Unable to parse QEMU_AIO_POLL_MAX_NS "
+ "environment variable\n");
+ }
+ }
+
#ifdef CONFIG_EPOLL_CREATE1
assert(!ctx->epollfd);
ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
diff --git a/include/block/aio.h b/include/block/aio.h
index c7ae27c..2be1955 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -42,8 +42,10 @@ void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
void qemu_aio_unref(void *p);
void qemu_aio_ref(void *p);
+typedef struct AioPollHandler AioPollHandler;
typedef struct AioHandler AioHandler;
typedef void QEMUBHFunc(void *opaque);
+typedef bool AioPollFn(void *opaque);
typedef void IOHandler(void *opaque);
struct ThreadPool;
@@ -64,6 +66,15 @@ struct AioContext {
*/
int walking_handlers;
+ /* The list of registered AIO poll handlers */
+ QLIST_HEAD(, AioPollHandler) aio_poll_handlers;
+
+ /* This is a simple lock used to protect the aio_poll_handlers list.
+ * Specifically, it's used to ensure that no callbacks are removed while
+ * we're walking and dispatching callbacks.
+ */
+ int walking_poll_handlers;
+
/* Used to avoid unnecessary event_notifier_set calls in aio_notify;
* accessed with atomic primitives. If this field is 0, everything
* (file descriptors, bottom halves, timers) will be re-evaluated
@@ -327,6 +338,11 @@ void aio_set_fd_handler(AioContext *ctx,
IOHandler *io_write,
void *opaque);
+void aio_set_poll_handler(AioContext *ctx,
+ AioPollFn *poll_fn,
+ IOHandler *io_fn,
+ void *opaque);
+
/* Register an event notifier and associated callbacks. Behaves very similarly
* to event_notifier_set_handler. Unlike event_notifier_set_handler, these callbacks
* will be invoked when using aio_poll().
--
2.7.4
* [Qemu-devel] [RFC 2/3] virtio: poll virtqueues for new buffers
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
@ 2016-11-09 17:13 ` Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 3/3] linux-aio: poll ring for completions Stefan Hajnoczi
` (4 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-09 17:13 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Karl Rister, Fam Zheng, Stefan Hajnoczi
Add an AioContext poll handler to detect new virtqueue buffers without
waiting for a guest->host notification.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/virtio/virtio.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index bcbcfe0..b191e01 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2026,15 +2026,34 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
}
}
+static bool virtio_queue_aio_poll(void *opaque)
+{
+ VirtQueue *vq = opaque;
+
+ return !virtio_queue_empty(vq);
+}
+
+static void virtio_queue_aio_poll_handler(void *opaque)
+{
+ VirtQueue *vq = opaque;
+
+ virtio_queue_notify_aio_vq(vq);
+}
+
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
VirtIOHandleOutput handle_output)
{
if (handle_output) {
vq->handle_aio_output = handle_output;
+ aio_set_poll_handler(ctx,
+ virtio_queue_aio_poll,
+ virtio_queue_aio_poll_handler,
+ vq);
aio_set_event_notifier(ctx, &vq->host_notifier, true,
virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
+ aio_set_poll_handler(ctx, virtio_queue_aio_poll, NULL, vq);
/* Test and clear notifier before after disabling event,
* in case poll callback didn't have time to run. */
virtio_queue_host_notifier_aio_read(&vq->host_notifier);
--
2.7.4
* [Qemu-devel] [RFC 3/3] linux-aio: poll ring for completions
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 2/3] virtio: poll virtqueues for new buffers Stefan Hajnoczi
@ 2016-11-09 17:13 ` Stefan Hajnoczi
2016-11-11 19:59 ` [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Karl Rister
` (3 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-09 17:13 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Karl Rister, Fam Zheng, Stefan Hajnoczi
The Linux AIO userspace ABI includes a ring that is shared with the
kernel. This allows userspace programs to process completions without
system calls.
Add an AioContext poll handler to check for completions in the ring.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/linux-aio.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 1685ec2..58446dc 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -255,6 +255,21 @@ static void qemu_laio_completion_cb(EventNotifier *e)
}
}
+static bool laio_poll(void *opaque)
+{
+ LinuxAioState *s = opaque;
+ struct io_event *events;
+
+ return io_getevents_peek(s->ctx, &events);
+}
+
+static void laio_poll_handler(void *opaque)
+{
+ LinuxAioState *s = opaque;
+
+ qemu_laio_process_completions_and_submit(s);
+}
+
static void laio_cancel(BlockAIOCB *blockacb)
{
struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -440,6 +455,7 @@ BlockAIOCB *laio_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
{
aio_set_event_notifier(old_context, &s->e, false, NULL);
+ aio_set_poll_handler(old_context, laio_poll, NULL, s);
qemu_bh_delete(s->completion_bh);
}
@@ -447,6 +463,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
{
s->aio_context = new_context;
s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
+ aio_set_poll_handler(new_context, laio_poll, laio_poll_handler, s);
aio_set_event_notifier(new_context, &s->e, false,
qemu_laio_completion_cb);
}
--
2.7.4
* Re: [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler()
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
@ 2016-11-09 17:30 ` Paolo Bonzini
2016-11-10 10:17 ` Stefan Hajnoczi
2016-11-15 20:14 ` Fam Zheng
1 sibling, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-11-09 17:30 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel; +Cc: Karl Rister, Fam Zheng
On 09/11/2016 18:13, Stefan Hajnoczi wrote:
> Poll handlers are executed for a certain amount of time before the event
> loop polls file descriptors. This can be used to keep the event loop
> thread scheduled and may therefore recognize events faster than blocking
> poll(2) calls.
>
> This is an experimental feature to reduce I/O latency in high IOPS
> scenarios.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> include/block/aio.h | 16 +++++++
> 2 files changed, 149 insertions(+)
>
> diff --git a/aio-posix.c b/aio-posix.c
> index e13b9ab..933a972 100644
> --- a/aio-posix.c
> +++ b/aio-posix.c
> @@ -18,6 +18,7 @@
> #include "block/block.h"
> #include "qemu/queue.h"
> #include "qemu/sockets.h"
> +#include "qemu/cutils.h"
> #ifdef CONFIG_EPOLL_CREATE1
> #include <sys/epoll.h>
> #endif
> @@ -33,6 +34,19 @@ struct AioHandler
> QLIST_ENTRY(AioHandler) node;
> };
>
> +struct AioPollHandler {
> + QLIST_ENTRY(AioPollHandler) node;
> +
> + AioPollFn *poll_fn; /* check whether to invoke io_fn() */
> + IOHandler *io_fn; /* handler callback */
> + void *opaque; /* user-defined argument to callbacks */
> +
> + bool deleted;
> +};
> +
> +/* How long to poll AioPollHandlers before monitoring file descriptors */
> +static int64_t aio_poll_max_ns;
> +
> #ifdef CONFIG_EPOLL_CREATE1
>
> /* The fd number threashold to switch to epoll */
> @@ -264,8 +278,61 @@ void aio_set_event_notifier(AioContext *ctx,
> is_external, (IOHandler *)io_read, NULL, notifier);
> }
>
> +static AioPollHandler *find_aio_poll_handler(AioContext *ctx,
> + AioPollFn *poll_fn,
> + void *opaque)
> +{
> + AioPollHandler *node;
> +
> + QLIST_FOREACH(node, &ctx->aio_poll_handlers, node) {
> + if (node->poll_fn == poll_fn &&
> + node->opaque == opaque) {
> + if (!node->deleted) {
> + return node;
> + }
> + }
> + }
> +
> + return NULL;
> +}
> +
> +void aio_set_poll_handler(AioContext *ctx,
> + AioPollFn *poll_fn,
> + IOHandler *io_fn,
> + void *opaque)
> +{
> + AioPollHandler *node;
> +
> + node = find_aio_poll_handler(ctx, poll_fn, opaque);
> + if (!io_fn) { /* remove */
> + if (!node) {
> + return;
> + }
> +
> + if (ctx->walking_poll_handlers) {
> + node->deleted = true;
> + } else {
> + QLIST_REMOVE(node, node);
> + g_free(node);
> + }
> + } else { /* add or update */
> + if (!node) {
> + node = g_new(AioPollHandler, 1);
> + QLIST_INSERT_HEAD(&ctx->aio_poll_handlers, node, node);
> + }
> +
> + node->poll_fn = poll_fn;
> + node->io_fn = io_fn;
> + node->opaque = opaque;
> + }
> +
> + aio_notify(ctx);
> +}
> +
> +
> bool aio_prepare(AioContext *ctx)
> {
> + /* TODO run poll handlers? */
> return false;
> }
>
> @@ -400,6 +467,47 @@ static void add_pollfd(AioHandler *node)
> npfd++;
> }
>
> +static bool run_poll_handlers(AioContext *ctx)
> +{
> + int64_t start_time;
> + unsigned int loop_count = 0;
> + bool fired = false;
> +
> + /* Is there any polling to be done? */
I think the question is not "is there any polling to be done" but rather
"is there anything that requires looking at a file descriptor". If you
have e.g. an NBD device on the AioContext you cannot poll. On the other
hand if all you have is bottom halves (which you can poll with
ctx->notified), AIO and virtio ioeventfds, you can poll.
In particular, testing for bottom halves is necessary to avoid incurring
extra latency on flushes, which use the thread pool.
Perhaps the poll handler could be a parameter to aio_set_event_notifier?
run_poll_handlers can just set revents (to G_IO_IN for example) if the
polling handler returns true, and return true as well. aio_poll can
then call aio_notify_accept and aio_dispatch, bypassing the poll system
call altogether.
Thanks,
Paolo
> + if (!QLIST_FIRST(&ctx->aio_poll_handlers)) {
> + return false;
> + }
> +
> + start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
> + while (!fired) {
> + AioPollHandler *node;
> + AioPollHandler *tmp;
> +
> + QLIST_FOREACH_SAFE(node, &ctx->aio_poll_handlers, node, tmp) {
> + ctx->walking_poll_handlers++;
> + if (!node->deleted && node->poll_fn(node->opaque)) {
> + node->io_fn(node->opaque);
> + fired = true;
> + }
> + ctx->walking_poll_handlers--;
> +
> + if (!ctx->walking_poll_handlers && node->deleted) {
> + QLIST_REMOVE(node, node);
> + g_free(node);
> + }
> + }
> +
> + loop_count++;
> + if ((loop_count % 1024) == 0 &&
> + qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time >
> + aio_poll_max_ns) {
> + break;
> + }
> + }
> +
> + return fired;
> +}
> +
> bool aio_poll(AioContext *ctx, bool blocking)
> {
> AioHandler *node;
> @@ -410,6 +518,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
> aio_context_acquire(ctx);
> progress = false;
>
> + if (aio_poll_max_ns &&
> + /* see qemu_soonest_timeout() uint64_t hack */
> + (uint64_t)aio_compute_timeout(ctx) > (uint64_t)aio_poll_max_ns) {
> + if (run_poll_handlers(ctx)) {
> + progress = true;
> + blocking = false; /* poll again, don't block */
> + }
> + }
> +
> /* aio_notify can avoid the expensive event_notifier_set if
> * everything (file descriptors, bottom halves, timers) will
> * be re-evaluated before the next blocking poll(). This is
> @@ -484,6 +601,22 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
> void aio_context_setup(AioContext *ctx)
> {
> + if (!aio_poll_max_ns) {
> + int64_t val;
> + const char *env_str = getenv("QEMU_AIO_POLL_MAX_NS");
> +
> + if (!env_str) {
> + env_str = "0";
> + }
> +
> + if (!qemu_strtoll(env_str, NULL, 10, &val)) {
> + aio_poll_max_ns = val;
> + } else {
> + fprintf(stderr, "Unable to parse QEMU_AIO_POLL_MAX_NS "
> + "environment variable\n");
> + }
> + }
> +
> #ifdef CONFIG_EPOLL_CREATE1
> assert(!ctx->epollfd);
> ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
> diff --git a/include/block/aio.h b/include/block/aio.h
> index c7ae27c..2be1955 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -42,8 +42,10 @@ void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
> void qemu_aio_unref(void *p);
> void qemu_aio_ref(void *p);
>
> +typedef struct AioPollHandler AioPollHandler;
> typedef struct AioHandler AioHandler;
> typedef void QEMUBHFunc(void *opaque);
> +typedef bool AioPollFn(void *opaque);
> typedef void IOHandler(void *opaque);
>
> struct ThreadPool;
> @@ -64,6 +66,15 @@ struct AioContext {
> */
> int walking_handlers;
>
> + /* The list of registered AIO poll handlers */
> + QLIST_HEAD(, AioPollHandler) aio_poll_handlers;
> +
> + /* This is a simple lock used to protect the aio_poll_handlers list.
> + * Specifically, it's used to ensure that no callbacks are removed while
> + * we're walking and dispatching callbacks.
> + */
> + int walking_poll_handlers;
> +
> /* Used to avoid unnecessary event_notifier_set calls in aio_notify;
> * accessed with atomic primitives. If this field is 0, everything
> * (file descriptors, bottom halves, timers) will be re-evaluated
> @@ -327,6 +338,11 @@ void aio_set_fd_handler(AioContext *ctx,
> IOHandler *io_write,
> void *opaque);
>
> +void aio_set_poll_handler(AioContext *ctx,
> + AioPollFn *poll_fn,
> + IOHandler *io_fn,
> + void *opaque);
> +
> /* Register an event notifier and associated callbacks. Behaves very similarly
> * to event_notifier_set_handler. Unlike event_notifier_set_handler, these callbacks
> * will be invoked when using aio_poll().
>
* Re: [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler()
2016-11-09 17:30 ` Paolo Bonzini
@ 2016-11-10 10:17 ` Stefan Hajnoczi
2016-11-10 13:20 ` Paolo Bonzini
0 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-10 10:17 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Karl Rister, Fam Zheng
On Wed, Nov 09, 2016 at 06:30:11PM +0100, Paolo Bonzini wrote:
Thanks for the feedback. I hope that Karl will be able to find a
QEMU_AIO_POLL_MAX_NS setting that improves the benchmark. At that point
I'll send a new version of this series so we can iron out the details.
> > +static bool run_poll_handlers(AioContext *ctx)
> > +{
> > + int64_t start_time;
> > + unsigned int loop_count = 0;
> > + bool fired = false;
> > +
> > + /* Is there any polling to be done? */
>
> I think the question is not "is there any polling to be done" but rather
> "is there anything that requires looking at a file descriptor". If you
> have e.g. an NBD device on the AioContext you cannot poll. On the other
> hand if all you have is bottom halves (which you can poll with
> ctx->notified), AIO and virtio ioeventfds, you can poll.
This is a good point. Polling should only be done if all resources in
the AioContext benefit from polling - otherwise it adds latency to
resources that don't support polling.
Another thing: only poll if there is work to be done. Linux AIO must
only poll the ring when there are >0 requests outstanding. Currently it
always polls (doh!).
> In particular, testing for bottom halves is necessary to avoid incurring
> extra latency on flushes, which use the thread pool.
The current code uses a half-solution: it uses aio_compute_timeout() to
see if any existing BHs are ready to execute *before* beginning to poll.
Really we should poll BHs since they can be scheduled during the polling
loop.
> Perhaps the poll handler could be a parameter to aio_set_event_notifier?
> run_poll_handlers can just set revents (to G_IO_IN for example) if the
> polling handler returns true, and return true as well. aio_poll can
> then call aio_notify_accept and aio_dispatch, bypassing the poll system
> call altogether.
This is problematic. The poll source != file descriptor so there is a
race condition:
1. Guest increments virtqueue avail.idx
2. QEMU poll notices avail.idx update and marks fd.revents readable.
3. QEMU dispatches fd handler:
void virtio_queue_host_notifier_read(EventNotifier *n)
{
VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
if (event_notifier_test_and_clear(n)) {
virtio_queue_notify_vq(vq);
}
}
4. Guest kicks virtqueue -> ioeventfd is signalled
Unfortunately polling is "too fast" and event_notifier_test_and_clear()
returns false; we won't process the virtqueue!
Pretending that polling is the same as fd monitoring only works when #4
happens before #3. We have to solve this race condition.
The simplest solution is to get rid of the if statement (i.e. enable
spurious event processing). Not sure if that has a drawback though.
Do you have a nicer solution in mind?
* Re: [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler()
2016-11-10 10:17 ` Stefan Hajnoczi
@ 2016-11-10 13:20 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-11-10 13:20 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, Karl Rister, Fam Zheng
On Thursday, November 10, 2016 11:17:35 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > I think the question is not "is there any polling to be done" but rather
> > "is there anything that requires looking at a file descriptor". If you
> > have e.g. an NBD device on the AioContext you cannot poll. On the other
> > hand if all you have is bottom halves (which you can poll with
> > ctx->notified), AIO and virtio ioeventfds, you can poll.
>
> This is a good point. Polling should only be done if all resources in
> the AioContext benefit from polling - otherwise it adds latency to
> resources that don't support polling.
>
> Another thing: only poll if there is work to be done. Linux AIO must
> only poll the ring when there are >0 requests outstanding. Currently it
> always polls (doh!).
Good idea. So the result of the polling callback could be one of ready, not
ready, or not active? Or did you have something else in mind?
> > In particular, testing for bottom halves is necessary to avoid incurring
> > extra latency on flushes, which use the thread pool.
>
> The current code uses a half-solution: it uses aio_compute_timeout() to
> see if any existing BHs are ready to execute *before* beginning to poll.
>
> Really we should poll BHs since they can be scheduled during the polling
> loop.
We should do so for correctness (hopefully with just ctx->notified: there
should be no need to walk the BH list during polling). However, the user
of the BH should activate polling "manually" by registering its own
polling handler: if there are no active polling handlers, just ignore
bottom halves and do the poll().
This is because there are always a handful of registered bottom halves, but
they are not necessarily "activatable" from other threads. For example the
thread pool always has one BH but as you noticed for Linux AIO, it may not
have any pending requests. So the thread pool would still have to register
with aio_set_poll_handler, even if it uses bottom halves internally for
the signaling. I guess it would not need to register an associated IOHandler,
since it can just use aio_bh_poll.
A couple more random observations:
- you can pass the output of aio_compute_timeout(ctx) to run_poll_handlers,
like MIN((uint64_t)aio_compute_timeout(ctx), (uint64_t)aio_poll_max_ns).
- since we know that all resources are pollable, we don't need to poll() at
all if polling succeeds (though we do need aio_notify_accept()+aio_bh_poll()).
> > Perhaps the poll handler could be a parameter to aio_set_event_notifier?
> > run_poll_handlers can just set revents (to G_IO_IN for example) if the
> > polling handler returns true, and return true as well. aio_poll can
> > then call aio_notify_accept and aio_dispatch, bypassing the poll system
> > call altogether.
>
> This is problematic. The poll source != file descriptor so there is a
> race condition:
>
> 1. Guest increments virtqueue avail.idx
> 2. QEMU poll notices avail.idx update and marks fd.revents readable.
> 3. QEMU dispatches fd handler:
> 4. Guest kicks virtqueue -> ioeventfd is signalled
>
> Unfortunately polling is "too fast" and event_notifier_test_and_clear()
> returns false; we won't process the virtqueue!
>
> Pretending that polling is the same as fd monitoring only works when #4
> happens before #3. We have to solve this race condition.
>
> The simplest solution is to get rid of the if statement (i.e. enable
> spurious event processing). Not sure if that has a drawback though.
> Do you have a nicer solution in mind?
No, I don't. Removing the if seems sensible, but I like the polling handler
more now that I know why it's there. The event_notifier_test_and_clear does
add a small latency.
On one hand, because you need to check if *all* "resources"
support polling, you need a common definition of "resource" (e.g.
aio_set_fd_handler). But on the other hand it would be nice to have
a streamlined polling callback. I guess you could add something like
aio_set_handler that takes a struct with all interesting callbacks:
- in/out callbacks (for aio_set_fd_handler)
- polling handler
- polling callback
Then there would be simplified interfaces on top, such as aio_set_fd_handler,
aio_set_event_notifier and your own aio_set_poll_handler.
Paolo
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
` (2 preceding siblings ...)
2016-11-09 17:13 ` [Qemu-devel] [RFC 3/3] linux-aio: poll ring for completions Stefan Hajnoczi
@ 2016-11-11 19:59 ` Karl Rister
2016-11-14 13:53 ` Fam Zheng
2016-11-14 15:26 ` Stefan Hajnoczi
2016-11-13 6:20 ` no-reply
` (2 subsequent siblings)
6 siblings, 2 replies; 29+ messages in thread
From: Karl Rister @ 2016-11-11 19:59 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel; +Cc: Paolo Bonzini, Fam Zheng, Andrew Theurer
On 11/09/2016 11:13 AM, Stefan Hajnoczi wrote:
> Recent performance investigation work done by Karl Rister shows that the
> guest->host notification takes around 20 us. This is more than the "overhead"
> of QEMU itself (e.g. block layer).
>
> One way to avoid the costly exit is to use polling instead of notification.
> The main drawback of polling is that it consumes CPU resources. In order to
> benefit performance the host must have extra CPU cycles available on physical
> CPUs that aren't used by the guest.
>
> This is an experimental AioContext polling implementation. It adds a polling
> callback into the event loop. Polling functions are implemented for virtio-blk
> virtqueue guest->host kick and Linux AIO completion.
>
> The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> poll before entering the usual blocking poll(2) syscall. Try setting this
> variable to the time from old request completion to new virtqueue kick.
>
> By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> polling!
>
> Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> values. If you don't find a good value we should double-check the tracing data
> to see if this experimental code can be improved.
Stefan
I ran some quick tests with your patches and got some pretty good gains,
but also some seemingly odd behavior.
These results are for a 5-minute test doing sequential 4KB requests from
fio using O_DIRECT, libaio, and IO depth of 1. The requests are
performed directly against the virtio-blk device (no filesystem), which
is backed by a 400GB NVMe card.
QEMU_AIO_POLL_MAX_NS      IOPs
               unset    31,383
                   1    46,860
                   2    46,440
                   4    35,246
                   8    34,973
                  16    46,794
                  32    46,729
                  64    35,520
                 128    45,902
I found the results for 4, 8, and 64 odd so I re-ran some tests to check
for consistency. I used values of 2 and 4 and ran each 5 times. Here
is what I got:
Iteration    QEMU_AIO_POLL_MAX_NS=2    QEMU_AIO_POLL_MAX_NS=4
    1                46,972                    35,434
    2                46,939                    35,719
    3                47,005                    35,584
    4                47,016                    35,615
    5                47,267                    35,474
So the results seem consistent.
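For reference, the workload described above corresponds roughly to this fio job file (an illustrative reconstruction — the device path, read direction, and runtime are assumptions, not taken from the original report):

```ini
; 4 KB sequential requests, O_DIRECT, libaio, queue depth 1,
; issued directly against the virtio-blk device (no filesystem)
[seq-4k-qd1]
filename=/dev/vdb
rw=read
bs=4k
direct=1
ioengine=libaio
iodepth=1
runtime=300
time_based=1
```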
I saw some discussion on the patches which makes me think you'll be
making some changes, is that right? If so, I may wait for the updates
and then we can run the much more exhaustive set of workloads
(sequential read and write, random read and write) at various block
sizes (4, 8, 16, 32, 64, 128, and 256) and multiple IO depths (1 and 32)
that we were doing when we started looking at this.
Karl
>
> Stefan Hajnoczi (3):
> aio-posix: add aio_set_poll_handler()
> virtio: poll virtqueues for new buffers
> linux-aio: poll ring for completions
>
> aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> block/linux-aio.c | 17 +++++++
> hw/virtio/virtio.c | 19 ++++++++
> include/block/aio.h | 16 +++++++
> 4 files changed, 185 insertions(+)
>
--
Karl Rister <krister@redhat.com>
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
` (3 preceding siblings ...)
2016-11-11 19:59 ` [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Karl Rister
@ 2016-11-13 6:20 ` no-reply
2016-11-14 14:51 ` Christian Borntraeger
2016-11-14 14:59 ` Christian Borntraeger
6 siblings, 0 replies; 29+ messages in thread
From: no-reply @ 2016-11-13 6:20 UTC (permalink / raw)
To: stefanha; +Cc: famz, qemu-devel, pbonzini
Hi,
Your series failed automatic build test. Please find the testing commands and
their output below. If you have docker installed, you can probably reproduce it
locally.
Type: series
Subject: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
Message-id: 1478711602-12620-1-git-send-email-stefanha@redhat.com
=== TEST SCRIPT BEGIN ===
#!/bin/bash
set -e
git submodule update --init dtc
# Let docker tests dump environment info
export SHOW_ENV=1
export J=16
make docker-test-quick@centos6
make docker-test-mingw@fedora
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
40e0074 linux-aio: poll ring for completions
c7e69fe virtio: poll virtqueues for new buffers
3129add aio-posix: add aio_set_poll_handler()
=== OUTPUT BEGIN ===
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf'
BUILD centos6
make[1]: Entering directory `/var/tmp/patchew-tester-tmp-c68h00s0/src'
ARCHIVE qemu.tgz
ARCHIVE dtc.tgz
COPY RUNNER
RUN test-quick in qemu:centos6
Packages installed:
SDL-devel-1.2.14-7.el6_7.1.x86_64
ccache-3.1.6-2.el6.x86_64
epel-release-6-8.noarch
gcc-4.4.7-17.el6.x86_64
git-1.7.1-4.el6_7.1.x86_64
glib2-devel-2.28.8-5.el6.x86_64
libfdt-devel-1.4.0-1.el6.x86_64
make-3.81-23.el6.x86_64
package g++ is not installed
pixman-devel-0.32.8-1.el6.x86_64
tar-1.23-15.el6_8.x86_64
zlib-devel-1.2.3-29.el6.x86_64
Environment variables:
PACKAGES=libfdt-devel ccache tar git make gcc g++ zlib-devel glib2-devel SDL-devel pixman-devel epel-release
HOSTNAME=50ce3cd670bd
TERM=xterm
MAKEFLAGS= -j16
HISTSIZE=1000
J=16
USER=root
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
MAIL=/var/spool/mail/root
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
LANG=en_US.UTF-8
TARGET_LIST=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
FEATURES= dtc
DEBUG=
G_BROKEN_FILENAMES=1
CCACHE_HASHDIR=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/var/tmp/qemu-build/install
No C++ compiler available; disabling C++ specific optional code
Install prefix /var/tmp/qemu-build/install
BIOS directory /var/tmp/qemu-build/install/share/qemu
binary directory /var/tmp/qemu-build/install/bin
library directory /var/tmp/qemu-build/install/lib
module directory /var/tmp/qemu-build/install/lib/qemu
libexec directory /var/tmp/qemu-build/install/libexec
include directory /var/tmp/qemu-build/install/include
config directory /var/tmp/qemu-build/install/etc
local state directory /var/tmp/qemu-build/install/var
Manual directory /var/tmp/qemu-build/install/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path /tmp/qemu-test/src
C compiler cc
Host C compiler cc
C++ compiler
Objective-C compiler cc
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/include/pixman-1 -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-all
LDFLAGS -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
tcg debug enabled no
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
pixman system
SDL support yes (1.2.14)
GTK support no
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support no
GNUTLS rnd no
libgcrypt no
libgcrypt kdf no
nettle no
nettle kdf no
libtasn1 no
curses support no
virgl support no
curl support no
mingw32 support no
Audio drivers oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
VNC support yes
VNC SASL support no
VNC JPEG support no
VNC PNG support no
xen support no
brlapi support no
bluez support no
Documentation no
PIE yes
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support yes
Install blobs yes
KVM support yes
COLO support yes
RDMA support no
TCG interpreter no
fdt support yes
preadv support yes
fdatasync yes
madvise yes
posix_madvise yes
libcap-ng support no
vhost-net support yes
vhost-scsi support yes
vhost-vsock support yes
Trace backends log
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info no
QGA MSI support no
seccomp support no
coroutine backend ucontext
coroutine pool yes
debug stack usage no
GlusterFS support no
Archipelago support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support no
TPM passthrough yes
QOM debugging yes
lzo support no
snappy support no
bzip2 support no
NUMA host support no
tcmalloc support no
jemalloc support no
avx2 optimization no
replication support yes
GEN x86_64-softmmu/config-devices.mak.tmp
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qmp-commands.h
GEN qapi-types.h
GEN qapi-visit.h
GEN qapi-event.h
GEN qmp-introspect.h
GEN x86_64-softmmu/config-devices.mak
GEN aarch64-softmmu/config-devices.mak
GEN module_block.h
GEN tests/test-qapi-types.h
GEN tests/test-qapi-visit.h
GEN tests/test-qmp-commands.h
GEN tests/test-qapi-event.h
GEN tests/test-qmp-introspect.h
GEN config-all-devices.mak
GEN trace/generated-tracers.h
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
CC tests/qemu-iotests/socket_scm_helper.o
GEN qga/qapi-generated/qga-qapi-types.h
GEN qga/qapi-generated/qga-qapi-visit.h
GEN qga/qapi-generated/qga-qmp-commands.h
GEN qga/qapi-generated/qga-qapi-types.c
GEN qga/qapi-generated/qga-qapi-visit.c
GEN qga/qapi-generated/qga-qmp-marshal.c
GEN qmp-introspect.c
GEN qapi-event.c
GEN qapi-types.c
GEN qapi-visit.c
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qint.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qfloat.o
CC qobject/qbool.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
GEN trace/generated-tracers.c
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/compatfd.o
CC util/event_notifier-posix.o
CC util/mmap-alloc.o
CC util/oslib-posix.o
CC util/qemu-openpty.o
CC util/qemu-thread-posix.o
CC util/memfd.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/crc32c.o
CC util/hexdump.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-ucontext.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset-add-fd.o
CC stubs/fdset-find-fd.o
CC stubs/fdset-get-fd.o
CC stubs/fdset-remove-fd.o
CC stubs/gdbstub.o
CC stubs/get-fd.o
CC stubs/get-next-serial.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/mon-is-qmp.o
CC stubs/monitor-init.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/replay-user.o
CC stubs/reset.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/cpus.o
CC stubs/kvm.o
CC stubs/qmp_pc_dimm_device_list.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/vhost.o
CC stubs/iohandler.o
CC stubs/smbios_type_38.o
CC stubs/ipmi.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/migration-colo.o
CC contrib/ivshmem-client/ivshmem-client.o
CC contrib/ivshmem-client/main.o
CC contrib/ivshmem-server/ivshmem-server.o
CC contrib/ivshmem-server/main.o
CC qemu-nbd.o
CC async.o
CC thread-pool.o
CC block.o
CC blockjob.o
CC main-loop.o
CC iohandler.o
CC qemu-timer.o
CC aio-posix.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw_bsd.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qed.o
CC block/qed-gencb.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/raw-posix.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/crypto.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC crypto/init.o
CC crypto/hash.o
CC crypto/hash-glib.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlscredsx509.o
CC crypto/tlssession.o
CC crypto/secret.o
CC crypto/random-platform.o
CC crypto/pbkdf.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel-buffer.o
CC io/channel.o
CC io/channel-command.o
CC io/channel-file.o
CC io/channel-socket.o
CC io/channel-tls.o
CC io/channel-watch.o
CC io/channel-websock.o
CC io/task.o
CC io/channel-util.o
CC qom/object.o
CC qom/container.o
CC qom/qom-qobject.o
CC qom/object_interfaces.o
GEN qemu-img-cmds.h
CC qemu-io.o
CC qemu-bridge-helper.o
CC blockdev-nbd.o
CC blockdev.o
CC iothread.o
CC qdev-monitor.o
CC device-hotplug.o
CC os-posix.o
CC qemu-char.o
CC page_cache.o
CC accel.o
CC bt-host.o
CC bt-vhci.o
CC dma-helpers.o
CC vl.o
CC tpm.o
CC device_tree.o
GEN qmp-marshal.c
CC qmp.o
CC hmp.o
CC cpus-common.o
CC audio/audio.o
CC audio/noaudio.o
CC audio/wavaudio.o
CC audio/mixeng.o
CC audio/sdlaudio.o
CC audio/ossaudio.o
CC audio/wavcapture.o
CC backends/rng.o
CC backends/rng-egd.o
CC backends/rng-random.o
CC backends/msmouse.o
CC backends/testdev.o
CC backends/tpm.o
CC backends/hostmem.o
CC backends/hostmem-ram.o
CC backends/hostmem-file.o
CC backends/cryptodev.o
CC backends/cryptodev-builtin.o
CC block/stream.o
CC disas/arm.o
CC disas/i386.o
CC fsdev/qemu-fsdev-dummy.o
CC fsdev/qemu-fsdev-opts.o
CC hw/acpi/piix4.o
CC hw/acpi/core.o
CC hw/acpi/pcihp.o
CC hw/acpi/ich9.o
CC hw/acpi/tco.o
CC hw/acpi/cpu_hotplug.o
CC hw/acpi/memory_hotplug.o
CC hw/acpi/memory_hotplug_acpi_table.o
CC hw/acpi/cpu.o
CC hw/acpi/nvdimm.o
CC hw/acpi/acpi_interface.o
CC hw/acpi/bios-linker-loader.o
CC hw/acpi/aml-build.o
CC hw/acpi/ipmi.o
CC hw/audio/sb16.o
CC hw/audio/es1370.o
CC hw/audio/ac97.o
CC hw/audio/fmopl.o
CC hw/audio/adlib.o
CC hw/audio/gus.o
CC hw/audio/gusemu_hal.o
CC hw/audio/gusemu_mixer.o
CC hw/audio/cs4231a.o
CC hw/audio/intel-hda.o
CC hw/audio/hda-codec.o
CC hw/audio/pcspk.o
CC hw/audio/wm8750.o
CC hw/audio/pl041.o
CC hw/audio/lm4549.o
CC hw/audio/marvell_88w8618.o
CC hw/block/cdrom.o
CC hw/block/block.o
CC hw/block/hd-geometry.o
CC hw/block/fdc.o
CC hw/block/m25p80.o
CC hw/block/nand.o
CC hw/block/pflash_cfi01.o
CC hw/block/pflash_cfi02.o
CC hw/block/ecc.o
CC hw/block/onenand.o
CC hw/block/nvme.o
CC hw/bt/core.o
CC hw/bt/l2cap.o
CC hw/bt/sdp.o
CC hw/bt/hci.o
CC hw/bt/hid.o
CC hw/bt/hci-csr.o
CC hw/char/ipoctal232.o
CC hw/char/parallel.o
CC hw/char/pl011.o
CC hw/char/serial.o
CC hw/char/serial-isa.o
CC hw/char/serial-pci.o
CC hw/char/virtio-console.o
CC hw/char/cadence_uart.o
CC hw/char/debugcon.o
CC hw/char/imx_serial.o
CC hw/core/qdev.o
CC hw/core/qdev-properties.o
CC hw/core/bus.o
CC hw/core/fw-path-provider.o
CC hw/core/irq.o
CC hw/core/hotplug.o
CC hw/core/ptimer.o
CC hw/core/sysbus.o
CC hw/core/machine.o
CC hw/core/null-machine.o
CC hw/core/loader.o
CC hw/core/qdev-properties-system.o
CC hw/core/register.o
CC hw/core/or-irq.o
CC hw/core/platform-bus.o
CC hw/display/ads7846.o
CC hw/display/cirrus_vga.o
CC hw/display/pl110.o
CC hw/display/ssd0303.o
CC hw/display/ssd0323.o
CC hw/display/vga-pci.o
CC hw/display/vga-isa.o
CC hw/display/vmware_vga.o
CC hw/display/blizzard.o
CC hw/display/exynos4210_fimd.o
CC hw/display/framebuffer.o
CC hw/display/tc6393xb.o
CC hw/dma/pl080.o
CC hw/dma/pl330.o
CC hw/dma/i8257.o
CC hw/dma/xlnx-zynq-devcfg.o
CC hw/gpio/max7310.o
CC hw/gpio/pl061.o
CC hw/gpio/zaurus.o
CC hw/gpio/gpio_key.o
CC hw/i2c/core.o
CC hw/i2c/smbus.o
CC hw/i2c/smbus_eeprom.o
CC hw/i2c/i2c-ddc.o
CC hw/i2c/versatile_i2c.o
CC hw/i2c/smbus_ich9.o
CC hw/i2c/pm_smbus.o
CC hw/i2c/bitbang_i2c.o
CC hw/i2c/exynos4210_i2c.o
CC hw/i2c/imx_i2c.o
CC hw/i2c/aspeed_i2c.o
CC hw/ide/core.o
CC hw/ide/atapi.o
CC hw/ide/qdev.o
CC hw/ide/pci.o
CC hw/ide/isa.o
CC hw/ide/piix.o
CC hw/ide/microdrive.o
CC hw/ide/ich.o
CC hw/ide/ahci.o
CC hw/input/pckbd.o
CC hw/input/hid.o
CC hw/input/lm832x.o
CC hw/input/pl050.o
CC hw/input/ps2.o
CC hw/input/stellaris_input.o
CC hw/input/tsc2005.o
CC hw/input/vmmouse.o
CC hw/input/virtio-input.o
CC hw/input/virtio-input-hid.o
CC hw/input/virtio-input-host.o
CC hw/intc/i8259_common.o
CC hw/intc/i8259.o
CC hw/intc/pl190.o
CC hw/intc/imx_avic.o
CC hw/intc/realview_gic.o
CC hw/intc/ioapic_common.o
CC hw/intc/arm_gic_common.o
CC hw/intc/arm_gic.o
CC hw/intc/arm_gicv2m.o
CC hw/intc/arm_gicv3_common.o
CC hw/intc/arm_gicv3.o
CC hw/intc/arm_gicv3_dist.o
CC hw/intc/arm_gicv3_redist.o
CC hw/intc/arm_gicv3_its_common.o
CC hw/ipack/ipack.o
CC hw/intc/intc.o
CC hw/ipack/tpci200.o
CC hw/ipmi/ipmi.o
CC hw/ipmi/ipmi_bmc_sim.o
CC hw/ipmi/ipmi_bmc_extern.o
CC hw/ipmi/isa_ipmi_kcs.o
CC hw/ipmi/isa_ipmi_bt.o
CC hw/isa/isa-bus.o
CC hw/isa/apm.o
CC hw/mem/pc-dimm.o
CC hw/mem/nvdimm.o
CC hw/misc/applesmc.o
CC hw/misc/max111x.o
CC hw/misc/tmp105.o
CC hw/misc/debugexit.o
CC hw/misc/sga.o
CC hw/misc/pc-testdev.o
CC hw/misc/pci-testdev.o
CC hw/misc/arm_l2x0.o
CC hw/misc/arm_integrator_debug.o
CC hw/misc/a9scu.o
CC hw/misc/arm11scu.o
CC hw/net/ne2000.o
CC hw/net/eepro100.o
CC hw/net/pcnet-pci.o
CC hw/net/pcnet.o
CC hw/net/e1000.o
CC hw/net/e1000x_common.o
CC hw/net/net_tx_pkt.o
CC hw/net/net_rx_pkt.o
CC hw/net/e1000e.o
CC hw/net/e1000e_core.o
CC hw/net/rtl8139.o
CC hw/net/vmxnet3.o
CC hw/net/smc91c111.o
CC hw/net/lan9118.o
CC hw/net/ne2000-isa.o
CC hw/net/xgmac.o
CC hw/net/allwinner_emac.o
CC hw/net/imx_fec.o
CC hw/net/cadence_gem.o
CC hw/net/stellaris_enet.o
CC hw/net/rocker/rocker.o
CC hw/net/rocker/rocker_fp.o
CC hw/net/rocker/rocker_desc.o
CC hw/net/rocker/rocker_world.o
CC hw/net/rocker/rocker_of_dpa.o
CC hw/nvram/eeprom93xx.o
CC hw/nvram/fw_cfg.o
CC hw/nvram/chrp_nvram.o
CC hw/pci-bridge/pci_bridge_dev.o
CC hw/pci-bridge/pci_expander_bridge.o
CC hw/pci-bridge/xio3130_upstream.o
CC hw/pci-bridge/xio3130_downstream.o
CC hw/pci-bridge/ioh3420.o
CC hw/pci-bridge/i82801b11.o
CC hw/pci-host/pam.o
CC hw/pci-host/versatile.o
CC hw/pci-host/piix.o
CC hw/pci-host/q35.o
CC hw/pci-host/gpex.o
CC hw/pci/pci.o
CC hw/pci/pci_bridge.o
CC hw/pci/msix.o
CC hw/pci/msi.o
CC hw/pci/shpc.o
CC hw/pci/slotid_cap.o
CC hw/pci/pci_host.o
CC hw/pci/pcie_host.o
CC hw/pci/pcie.o
CC hw/pci/pcie_aer.o
CC hw/pci/pcie_port.o
CC hw/pci/pci-stub.o
CC hw/pcmcia/pcmcia.o
CC hw/scsi/scsi-disk.o
CC hw/scsi/scsi-generic.o
CC hw/scsi/scsi-bus.o
CC hw/scsi/lsi53c895a.o
CC hw/scsi/mptsas.o
CC hw/scsi/mptconfig.o
CC hw/scsi/mptendian.o
CC hw/scsi/megasas.o
CC hw/scsi/vmw_pvscsi.o
/tmp/qemu-test/src/hw/nvram/fw_cfg.c: In function ‘fw_cfg_dma_transfer’:
/tmp/qemu-test/src/hw/nvram/fw_cfg.c:329: warning: ‘read’ may be used uninitialized in this function
CC hw/scsi/esp.o
CC hw/scsi/esp-pci.o
CC hw/sd/pl181.o
CC hw/sd/ssi-sd.o
CC hw/sd/sd.o
CC hw/sd/core.o
CC hw/sd/sdhci.o
CC hw/smbios/smbios.o
CC hw/smbios/smbios_type_38.o
CC hw/ssi/pl022.o
CC hw/ssi/ssi.o
CC hw/ssi/xilinx_spips.o
CC hw/ssi/aspeed_smc.o
CC hw/ssi/stm32f2xx_spi.o
CC hw/timer/arm_timer.o
CC hw/timer/arm_mptimer.o
CC hw/timer/a9gtimer.o
CC hw/timer/cadence_ttc.o
CC hw/timer/ds1338.o
CC hw/timer/hpet.o
CC hw/timer/i8254_common.o
CC hw/timer/i8254.o
CC hw/timer/pl031.o
CC hw/timer/twl92230.o
CC hw/timer/imx_epit.o
CC hw/timer/imx_gpt.o
CC hw/timer/stm32f2xx_timer.o
CC hw/timer/aspeed_timer.o
CC hw/tpm/tpm_tis.o
CC hw/tpm/tpm_passthrough.o
CC hw/usb/core.o
CC hw/tpm/tpm_util.o
CC hw/usb/combined-packet.o
CC hw/usb/bus.o
CC hw/usb/libhw.o
CC hw/usb/desc.o
CC hw/usb/desc-msos.o
CC hw/usb/hcd-uhci.o
CC hw/usb/hcd-ohci.o
CC hw/usb/hcd-ehci.o
CC hw/usb/hcd-ehci-pci.o
CC hw/usb/hcd-ehci-sysbus.o
CC hw/usb/hcd-xhci.o
CC hw/usb/hcd-musb.o
CC hw/usb/dev-hub.o
CC hw/usb/dev-hid.o
CC hw/usb/dev-wacom.o
CC hw/usb/dev-storage.o
CC hw/usb/dev-uas.o
CC hw/usb/dev-audio.o
CC hw/usb/dev-serial.o
CC hw/usb/dev-network.o
CC hw/usb/dev-bluetooth.o
CC hw/usb/dev-smartcard-reader.o
CC hw/usb/dev-mtp.o
CC hw/usb/host-stub.o
CC hw/virtio/virtio-rng.o
CC hw/virtio/virtio-pci.o
CC hw/virtio/virtio-bus.o
CC hw/virtio/virtio-mmio.o
CC hw/watchdog/watchdog.o
CC hw/watchdog/wdt_i6300esb.o
CC hw/watchdog/wdt_ib700.o
CC migration/migration.o
CC migration/socket.o
CC migration/fd.o
CC migration/exec.o
CC migration/tls.o
CC migration/colo-comm.o
CC migration/colo.o
CC migration/colo-failover.o
CC migration/vmstate.o
CC migration/qemu-file.o
CC migration/qemu-file-channel.o
CC migration/xbzrle.o
CC migration/postcopy-ram.o
CC migration/qjson.o
CC migration/block.o
CC net/net.o
CC net/queue.o
CC net/checksum.o
CC net/util.o
CC net/hub.o
CC net/socket.o
CC net/dump.o
CC net/eth.o
CC net/l2tpv3.o
CC net/tap.o
CC net/vhost-user.o
CC net/tap-linux.o
CC net/slirp.o
CC net/filter.o
CC net/filter-buffer.o
CC net/filter-mirror.o
CC net/colo-compare.o
CC net/colo.o
CC net/filter-rewriter.o
CC qom/cpu.o
CC replay/replay.o
CC replay/replay-internal.o
CC replay/replay-events.o
CC replay/replay-time.o
CC replay/replay-input.o
CC replay/replay-char.o
CC replay/replay-snapshot.o
CC slirp/cksum.o
CC slirp/if.o
CC slirp/ip_icmp.o
CC slirp/ip6_icmp.o
CC slirp/ip6_input.o
CC slirp/ip6_output.o
CC slirp/ip_input.o
CC slirp/ip_output.o
CC slirp/dnssearch.o
CC slirp/dhcpv6.o
CC slirp/slirp.o
CC slirp/mbuf.o
CC slirp/misc.o
CC slirp/sbuf.o
CC slirp/socket.o
CC slirp/tcp_input.o
CC slirp/tcp_output.o
CC slirp/tcp_subr.o
CC slirp/tcp_timer.o
/tmp/qemu-test/src/slirp/tcp_input.c: In function ‘tcp_input’:
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_p’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_len’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_tos’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_id’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_off’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_ttl’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_sum’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_src.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_dst.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:220: warning: ‘save_ip6.ip_nh’ may be used uninitialized in this function
CC slirp/udp.o
/tmp/qemu-test/src/replay/replay-internal.c: In function ‘replay_put_array’:
/tmp/qemu-test/src/replay/replay-internal.c:65: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
CC slirp/udp6.o
CC slirp/bootp.o
CC slirp/tftp.o
CC slirp/arp_table.o
CC slirp/ndp_table.o
CC ui/keymaps.o
CC ui/console.o
CC ui/cursor.o
CC ui/qemu-pixman.o
CC ui/input.o
CC ui/input-keymap.o
CC ui/input-legacy.o
CC ui/input-linux.o
CC ui/sdl.o
CC ui/sdl_zoom.o
CC ui/x_keymap.o
CC ui/vnc.o
CC ui/vnc-enc-zlib.o
CC ui/vnc-enc-hextile.o
CC ui/vnc-enc-tight.o
CC ui/vnc-palette.o
CC ui/vnc-enc-zrle.o
CC ui/vnc-auth-vencrypt.o
CC ui/vnc-ws.o
CC ui/vnc-jobs.o
LINK tests/qemu-iotests/socket_scm_helper
CC qga/commands.o
AS optionrom/multiboot.o
AS optionrom/linuxboot.o
CC optionrom/linuxboot_dma.o
cc: unrecognized option '-no-integrated-as'
cc: unrecognized option '-no-integrated-as'
CC qga/guest-agent-command-state.o
CC qga/main.o
AS optionrom/kvmvapic.o
CC qga/commands-posix.o
BUILD optionrom/multiboot.img
CC qga/channel-posix.o
BUILD optionrom/linuxboot_dma.img
BUILD optionrom/linuxboot.img
CC qga/qapi-generated/qga-qapi-types.o
CC qga/qapi-generated/qga-qapi-visit.o
BUILD optionrom/multiboot.raw
CC qga/qapi-generated/qga-qmp-marshal.o
CC qmp-introspect.o
BUILD optionrom/linuxboot.raw
BUILD optionrom/linuxboot_dma.raw
CC qapi-types.o
BUILD optionrom/kvmvapic.img
CC qapi-visit.o
SIGN optionrom/multiboot.bin
SIGN optionrom/linuxboot.bin
SIGN optionrom/linuxboot_dma.bin
BUILD optionrom/kvmvapic.raw
CC qapi-event.o
AR libqemustub.a
CC qemu-img.o
SIGN optionrom/kvmvapic.bin
CC qmp-marshal.o
CC trace/generated-tracers.o
AR libqemuutil.a
LINK qemu-ga
LINK ivshmem-client
LINK ivshmem-server
LINK qemu-nbd
LINK qemu-io
LINK qemu-bridge-helper
LINK qemu-img
GEN x86_64-softmmu/hmp-commands.h
GEN x86_64-softmmu/hmp-commands-info.h
GEN x86_64-softmmu/config-target.h
GEN aarch64-softmmu/hmp-commands.h
GEN aarch64-softmmu/hmp-commands-info.h
GEN aarch64-softmmu/config-target.h
CC x86_64-softmmu/exec.o
CC x86_64-softmmu/translate-all.o
CC x86_64-softmmu/cpu-exec.o
CC x86_64-softmmu/translate-common.o
CC x86_64-softmmu/cpu-exec-common.o
CC x86_64-softmmu/tcg/tcg.o
CC x86_64-softmmu/tcg/optimize.o
CC x86_64-softmmu/tcg/tcg-op.o
CC x86_64-softmmu/tcg/tcg-common.o
CC x86_64-softmmu/fpu/softfloat.o
CC x86_64-softmmu/disas.o
CC x86_64-softmmu/tcg-runtime.o
CC x86_64-softmmu/arch_init.o
CC x86_64-softmmu/cpus.o
CC x86_64-softmmu/monitor.o
CC x86_64-softmmu/gdbstub.o
CC x86_64-softmmu/balloon.o
CC x86_64-softmmu/ioport.o
CC aarch64-softmmu/exec.o
CC aarch64-softmmu/translate-all.o
CC aarch64-softmmu/cpu-exec.o
CC x86_64-softmmu/numa.o
CC x86_64-softmmu/qtest.o
CC aarch64-softmmu/translate-common.o
CC aarch64-softmmu/cpu-exec-common.o
CC x86_64-softmmu/bootdevice.o
CC aarch64-softmmu/tcg/tcg.o
CC x86_64-softmmu/kvm-all.o
CC aarch64-softmmu/tcg/tcg-op.o
CC aarch64-softmmu/tcg/optimize.o
CC x86_64-softmmu/memory.o
CC aarch64-softmmu/tcg/tcg-common.o
CC aarch64-softmmu/fpu/softfloat.o
CC aarch64-softmmu/disas.o
CC aarch64-softmmu/tcg-runtime.o
GEN aarch64-softmmu/gdbstub-xml.c
CC x86_64-softmmu/cputlb.o
CC aarch64-softmmu/kvm-stub.o
CC x86_64-softmmu/memory_mapping.o
CC x86_64-softmmu/dump.o
CC aarch64-softmmu/arch_init.o
CC x86_64-softmmu/migration/ram.o
CC aarch64-softmmu/cpus.o
CC x86_64-softmmu/migration/savevm.o
CC aarch64-softmmu/monitor.o
CC aarch64-softmmu/gdbstub.o
CC aarch64-softmmu/balloon.o
CC aarch64-softmmu/ioport.o
CC aarch64-softmmu/numa.o
CC aarch64-softmmu/qtest.o
CC aarch64-softmmu/bootdevice.o
CC aarch64-softmmu/memory.o
CC aarch64-softmmu/cputlb.o
CC aarch64-softmmu/memory_mapping.o
CC x86_64-softmmu/xen-common-stub.o
CC x86_64-softmmu/xen-hvm-stub.o
CC x86_64-softmmu/hw/block/virtio-blk.o
CC aarch64-softmmu/dump.o
CC x86_64-softmmu/hw/block/dataplane/virtio-blk.o
CC x86_64-softmmu/hw/char/virtio-serial-bus.o
CC x86_64-softmmu/hw/core/nmi.o
CC aarch64-softmmu/migration/ram.o
CC aarch64-softmmu/migration/savevm.o
CC aarch64-softmmu/xen-common-stub.o
CC aarch64-softmmu/xen-hvm-stub.o
CC aarch64-softmmu/hw/adc/stm32f2xx_adc.o
CC aarch64-softmmu/hw/block/virtio-blk.o
CC x86_64-softmmu/hw/core/generic-loader.o
CC aarch64-softmmu/hw/block/dataplane/virtio-blk.o
CC aarch64-softmmu/hw/char/exynos4210_uart.o
CC aarch64-softmmu/hw/char/omap_uart.o
CC x86_64-softmmu/hw/cpu/core.o
CC x86_64-softmmu/hw/display/vga.o
CC x86_64-softmmu/hw/display/virtio-gpu.o
CC x86_64-softmmu/hw/display/virtio-gpu-3d.o
CC aarch64-softmmu/hw/char/digic-uart.o
CC aarch64-softmmu/hw/char/stm32f2xx_usart.o
CC x86_64-softmmu/hw/display/virtio-gpu-pci.o
CC x86_64-softmmu/hw/display/virtio-vga.o
CC aarch64-softmmu/hw/char/bcm2835_aux.o
CC aarch64-softmmu/hw/char/virtio-serial-bus.o
CC aarch64-softmmu/hw/core/nmi.o
CC aarch64-softmmu/hw/core/generic-loader.o
CC aarch64-softmmu/hw/cpu/arm11mpcore.o
CC x86_64-softmmu/hw/intc/apic.o
CC x86_64-softmmu/hw/intc/apic_common.o
CC x86_64-softmmu/hw/intc/ioapic.o
CC x86_64-softmmu/hw/isa/lpc_ich9.o
CC x86_64-softmmu/hw/misc/vmport.o
CC x86_64-softmmu/hw/misc/ivshmem.o
CC x86_64-softmmu/hw/misc/pvpanic.o
CC aarch64-softmmu/hw/cpu/realview_mpcore.o
CC aarch64-softmmu/hw/cpu/a9mpcore.o
CC aarch64-softmmu/hw/cpu/a15mpcore.o
CC x86_64-softmmu/hw/misc/edu.o
CC aarch64-softmmu/hw/cpu/core.o
CC x86_64-softmmu/hw/misc/hyperv_testdev.o
CC x86_64-softmmu/hw/net/virtio-net.o
CC x86_64-softmmu/hw/net/vhost_net.o
CC aarch64-softmmu/hw/display/omap_dss.o
CC x86_64-softmmu/hw/scsi/virtio-scsi.o
CC aarch64-softmmu/hw/display/omap_lcdc.o
CC aarch64-softmmu/hw/display/pxa2xx_lcd.o
CC aarch64-softmmu/hw/display/bcm2835_fb.o
CC x86_64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC x86_64-softmmu/hw/scsi/vhost-scsi.o
CC aarch64-softmmu/hw/display/vga.o
CC x86_64-softmmu/hw/timer/mc146818rtc.o
CC x86_64-softmmu/hw/vfio/common.o
CC x86_64-softmmu/hw/vfio/pci.o
CC aarch64-softmmu/hw/display/virtio-gpu.o
CC aarch64-softmmu/hw/display/virtio-gpu-3d.o
CC aarch64-softmmu/hw/display/virtio-gpu-pci.o
CC x86_64-softmmu/hw/vfio/pci-quirks.o
CC x86_64-softmmu/hw/vfio/platform.o
CC aarch64-softmmu/hw/display/dpcd.o
CC aarch64-softmmu/hw/display/xlnx_dp.o
CC x86_64-softmmu/hw/vfio/calxeda-xgmac.o
CC x86_64-softmmu/hw/vfio/amd-xgbe.o
CC x86_64-softmmu/hw/vfio/spapr.o
CC aarch64-softmmu/hw/dma/xlnx_dpdma.o
CC x86_64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/dma/omap_dma.o
CC aarch64-softmmu/hw/dma/soc_dma.o
CC x86_64-softmmu/hw/virtio/virtio-balloon.o
CC aarch64-softmmu/hw/dma/pxa2xx_dma.o
CC aarch64-softmmu/hw/dma/bcm2835_dma.o
CC x86_64-softmmu/hw/virtio/vhost.o
CC aarch64-softmmu/hw/gpio/omap_gpio.o
CC aarch64-softmmu/hw/gpio/imx_gpio.o
CC aarch64-softmmu/hw/i2c/omap_i2c.o
CC x86_64-softmmu/hw/virtio/vhost-backend.o
CC x86_64-softmmu/hw/virtio/vhost-user.o
CC x86_64-softmmu/hw/virtio/vhost-vsock.o
CC aarch64-softmmu/hw/input/pxa2xx_keypad.o
CC aarch64-softmmu/hw/input/tsc210x.o
CC x86_64-softmmu/hw/virtio/virtio-crypto.o
CC aarch64-softmmu/hw/intc/armv7m_nvic.o
CC aarch64-softmmu/hw/intc/exynos4210_gic.o
CC aarch64-softmmu/hw/intc/exynos4210_combiner.o
CC aarch64-softmmu/hw/intc/omap_intc.o
CC x86_64-softmmu/hw/virtio/virtio-crypto-pci.o
CC x86_64-softmmu/hw/i386/multiboot.o
CC x86_64-softmmu/hw/i386/pc.o
CC aarch64-softmmu/hw/intc/bcm2835_ic.o
CC aarch64-softmmu/hw/intc/bcm2836_control.o
CC x86_64-softmmu/hw/i386/pc_piix.o
CC aarch64-softmmu/hw/intc/allwinner-a10-pic.o
CC x86_64-softmmu/hw/i386/pc_q35.o
CC x86_64-softmmu/hw/i386/pc_sysfw.o
CC x86_64-softmmu/hw/i386/x86-iommu.o
CC aarch64-softmmu/hw/intc/aspeed_vic.o
CC aarch64-softmmu/hw/intc/arm_gicv3_cpuif.o
CC aarch64-softmmu/hw/misc/ivshmem.o
CC aarch64-softmmu/hw/misc/arm_sysctl.o
CC aarch64-softmmu/hw/misc/cbus.o
CC x86_64-softmmu/hw/i386/amd_iommu.o
CC x86_64-softmmu/hw/i386/intel_iommu.o
CC x86_64-softmmu/hw/i386/kvmvapic.o
CC aarch64-softmmu/hw/misc/exynos4210_pmu.o
CC aarch64-softmmu/hw/misc/imx_ccm.o
CC x86_64-softmmu/hw/i386/acpi-build.o
CC aarch64-softmmu/hw/misc/imx31_ccm.o
CC x86_64-softmmu/hw/i386/pci-assign-load-rom.o
CC aarch64-softmmu/hw/misc/imx25_ccm.o
CC x86_64-softmmu/hw/i386/kvm/clock.o
CC aarch64-softmmu/hw/misc/imx6_ccm.o
CC aarch64-softmmu/hw/misc/imx6_src.o
CC aarch64-softmmu/hw/misc/mst_fpga.o
CC aarch64-softmmu/hw/misc/omap_clk.o
CC x86_64-softmmu/hw/i386/kvm/apic.o
CC aarch64-softmmu/hw/misc/omap_gpmc.o
CC aarch64-softmmu/hw/misc/omap_l4.o
CC aarch64-softmmu/hw/misc/omap_sdrc.o
CC aarch64-softmmu/hw/misc/omap_tap.o
CC aarch64-softmmu/hw/misc/bcm2835_mbox.o
CC x86_64-softmmu/hw/i386/kvm/i8259.o
CC aarch64-softmmu/hw/misc/bcm2835_property.o
/tmp/qemu-test/src/hw/i386/pc_piix.c: In function ‘igd_passthrough_isa_bridge_create’:
/tmp/qemu-test/src/hw/i386/pc_piix.c:1046: warning: ‘pch_rev_id’ may be used uninitialized in this function
CC aarch64-softmmu/hw/misc/zynq_slcr.o
CC x86_64-softmmu/hw/i386/kvm/ioapic.o
CC aarch64-softmmu/hw/misc/zynq-xadc.o
CC aarch64-softmmu/hw/misc/stm32f2xx_syscfg.o
CC aarch64-softmmu/hw/misc/edu.o
CC aarch64-softmmu/hw/misc/auxbus.o
CC x86_64-softmmu/hw/i386/kvm/i8254.o
CC aarch64-softmmu/hw/misc/aspeed_scu.o
CC x86_64-softmmu/hw/i386/kvm/pci-assign.o
CC aarch64-softmmu/hw/misc/aspeed_sdmc.o
CC x86_64-softmmu/target-i386/translate.o
CC aarch64-softmmu/hw/net/virtio-net.o
CC aarch64-softmmu/hw/net/vhost_net.o
CC aarch64-softmmu/hw/pcmcia/pxa2xx.o
CC aarch64-softmmu/hw/scsi/virtio-scsi.o
CC aarch64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC x86_64-softmmu/target-i386/helper.o
CC x86_64-softmmu/target-i386/cpu.o
CC x86_64-softmmu/target-i386/bpt_helper.o
CC x86_64-softmmu/target-i386/excp_helper.o
CC aarch64-softmmu/hw/scsi/vhost-scsi.o
CC x86_64-softmmu/target-i386/fpu_helper.o
CC x86_64-softmmu/target-i386/cc_helper.o
CC aarch64-softmmu/hw/sd/omap_mmc.o
CC x86_64-softmmu/target-i386/int_helper.o
CC x86_64-softmmu/target-i386/svm_helper.o
CC x86_64-softmmu/target-i386/smm_helper.o
CC aarch64-softmmu/hw/sd/pxa2xx_mmci.o
CC x86_64-softmmu/target-i386/misc_helper.o
CC x86_64-softmmu/target-i386/mem_helper.o
CC aarch64-softmmu/hw/ssi/omap_spi.o
CC aarch64-softmmu/hw/ssi/imx_spi.o
CC x86_64-softmmu/target-i386/seg_helper.o
CC aarch64-softmmu/hw/timer/exynos4210_mct.o
CC aarch64-softmmu/hw/timer/exynos4210_pwm.o
CC x86_64-softmmu/target-i386/mpx_helper.o
CC x86_64-softmmu/target-i386/gdbstub.o
CC aarch64-softmmu/hw/timer/exynos4210_rtc.o
CC aarch64-softmmu/hw/timer/omap_gptimer.o
CC x86_64-softmmu/target-i386/machine.o
CC x86_64-softmmu/target-i386/arch_memory_mapping.o
/tmp/qemu-test/src/hw/i386/acpi-build.c: In function ‘build_append_pci_bus_devices’:
/tmp/qemu-test/src/hw/i386/acpi-build.c:501: warning: ‘notify_method’ may be used uninitialized in this function
CC x86_64-softmmu/target-i386/arch_dump.o
CC x86_64-softmmu/target-i386/monitor.o
CC x86_64-softmmu/target-i386/kvm.o
CC x86_64-softmmu/target-i386/hyperv.o
CC aarch64-softmmu/hw/timer/omap_synctimer.o
CC aarch64-softmmu/hw/timer/pxa2xx_timer.o
CC x86_64-softmmu/trace/control-target.o
GEN trace/generated-helpers.c
CC aarch64-softmmu/hw/timer/digic-timer.o
CC aarch64-softmmu/hw/timer/allwinner-a10-pit.o
CC aarch64-softmmu/hw/usb/tusb6010.o
CC aarch64-softmmu/hw/vfio/common.o
CC aarch64-softmmu/hw/vfio/pci.o
CC aarch64-softmmu/hw/vfio/pci-quirks.o
CC aarch64-softmmu/hw/vfio/platform.o
CC aarch64-softmmu/hw/vfio/calxeda-xgmac.o
CC x86_64-softmmu/trace/generated-helpers.o
CC aarch64-softmmu/hw/vfio/amd-xgbe.o
CC aarch64-softmmu/hw/vfio/spapr.o
CC aarch64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/virtio/virtio-balloon.o
CC aarch64-softmmu/hw/virtio/vhost.o
CC aarch64-softmmu/hw/virtio/vhost-backend.o
CC aarch64-softmmu/hw/virtio/vhost-user.o
CC aarch64-softmmu/hw/virtio/vhost-vsock.o
CC aarch64-softmmu/hw/virtio/virtio-crypto-pci.o
CC aarch64-softmmu/hw/virtio/virtio-crypto.o
CC aarch64-softmmu/hw/arm/boot.o
CC aarch64-softmmu/hw/arm/collie.o
CC aarch64-softmmu/hw/arm/exynos4_boards.o
CC aarch64-softmmu/hw/arm/gumstix.o
CC aarch64-softmmu/hw/arm/highbank.o
CC aarch64-softmmu/hw/arm/integratorcp.o
CC aarch64-softmmu/hw/arm/digic_boards.o
CC aarch64-softmmu/hw/arm/mainstone.o
CC aarch64-softmmu/hw/arm/musicpal.o
CC aarch64-softmmu/hw/arm/omap_sx1.o
CC aarch64-softmmu/hw/arm/nseries.o
CC aarch64-softmmu/hw/arm/palm.o
CC aarch64-softmmu/hw/arm/realview.o
CC aarch64-softmmu/hw/arm/spitz.o
CC aarch64-softmmu/hw/arm/stellaris.o
CC aarch64-softmmu/hw/arm/tosa.o
CC aarch64-softmmu/hw/arm/versatilepb.o
CC aarch64-softmmu/hw/arm/vexpress.o
CC aarch64-softmmu/hw/arm/virt.o
CC aarch64-softmmu/hw/arm/xilinx_zynq.o
CC aarch64-softmmu/hw/arm/z2.o
CC aarch64-softmmu/hw/arm/virt-acpi-build.o
CC aarch64-softmmu/hw/arm/netduino2.o
CC aarch64-softmmu/hw/arm/sysbus-fdt.o
CC aarch64-softmmu/hw/arm/armv7m.o
CC aarch64-softmmu/hw/arm/exynos4210.o
CC aarch64-softmmu/hw/arm/pxa2xx.o
LINK x86_64-softmmu/qemu-system-x86_64
CC aarch64-softmmu/hw/arm/pxa2xx_gpio.o
CC aarch64-softmmu/hw/arm/pxa2xx_pic.o
CC aarch64-softmmu/hw/arm/digic.o
CC aarch64-softmmu/hw/arm/omap1.o
CC aarch64-softmmu/hw/arm/omap2.o
CC aarch64-softmmu/hw/arm/strongarm.o
CC aarch64-softmmu/hw/arm/allwinner-a10.o
CC aarch64-softmmu/hw/arm/cubieboard.o
CC aarch64-softmmu/hw/arm/bcm2835_peripherals.o
CC aarch64-softmmu/hw/arm/bcm2836.o
CC aarch64-softmmu/hw/arm/raspi.o
CC aarch64-softmmu/hw/arm/stm32f205_soc.o
CC aarch64-softmmu/hw/arm/xlnx-zynqmp.o
CC aarch64-softmmu/hw/arm/xlnx-ep108.o
CC aarch64-softmmu/hw/arm/fsl-imx25.o
CC aarch64-softmmu/hw/arm/imx25_pdk.o
CC aarch64-softmmu/hw/arm/fsl-imx31.o
CC aarch64-softmmu/hw/arm/kzm.o
CC aarch64-softmmu/hw/arm/fsl-imx6.o
CC aarch64-softmmu/hw/arm/sabrelite.o
CC aarch64-softmmu/hw/arm/aspeed_soc.o
CC aarch64-softmmu/hw/arm/aspeed.o
CC aarch64-softmmu/target-arm/arm-semi.o
CC aarch64-softmmu/target-arm/machine.o
CC aarch64-softmmu/target-arm/psci.o
CC aarch64-softmmu/target-arm/arch_dump.o
CC aarch64-softmmu/target-arm/monitor.o
CC aarch64-softmmu/target-arm/kvm-stub.o
CC aarch64-softmmu/target-arm/translate.o
CC aarch64-softmmu/target-arm/op_helper.o
CC aarch64-softmmu/target-arm/helper.o
CC aarch64-softmmu/target-arm/cpu.o
CC aarch64-softmmu/target-arm/neon_helper.o
CC aarch64-softmmu/target-arm/iwmmxt_helper.o
CC aarch64-softmmu/target-arm/cpu64.o
CC aarch64-softmmu/target-arm/gdbstub.o
CC aarch64-softmmu/target-arm/translate-a64.o
CC aarch64-softmmu/target-arm/helper-a64.o
CC aarch64-softmmu/target-arm/gdbstub64.o
CC aarch64-softmmu/target-arm/crypto_helper.o
/tmp/qemu-test/src/target-arm/translate-a64.c: In function ‘handle_shri_with_rndacc’:
/tmp/qemu-test/src/target-arm/translate-a64.c:6395: warning: ‘tcg_src_hi’ may be used uninitialized in this function
/tmp/qemu-test/src/target-arm/translate-a64.c: In function ‘disas_simd_scalar_two_reg_misc’:
/tmp/qemu-test/src/target-arm/translate-a64.c:8122: warning: ‘rmode’ may be used uninitialized in this function
CC aarch64-softmmu/target-arm/arm-powerctl.o
CC aarch64-softmmu/trace/control-target.o
GEN trace/generated-helpers.c
CC aarch64-softmmu/gdbstub-xml.o
CC aarch64-softmmu/trace/generated-helpers.o
LINK aarch64-softmmu/qemu-system-aarch64
TEST tests/qapi-schema/alternate-any.out
TEST tests/qapi-schema/alternate-array.out
TEST tests/qapi-schema/alternate-base.out
TEST tests/qapi-schema/alternate-clash.out
TEST tests/qapi-schema/alternate-conflict-dict.out
TEST tests/qapi-schema/alternate-conflict-string.out
TEST tests/qapi-schema/alternate-empty.out
TEST tests/qapi-schema/alternate-nested.out
TEST tests/qapi-schema/alternate-unknown.out
TEST tests/qapi-schema/args-alternate.out
TEST tests/qapi-schema/args-any.out
TEST tests/qapi-schema/args-array-empty.out
TEST tests/qapi-schema/args-array-unknown.out
TEST tests/qapi-schema/args-bad-boxed.out
TEST tests/qapi-schema/args-boxed-anon.out
TEST tests/qapi-schema/args-boxed-empty.out
TEST tests/qapi-schema/args-boxed-string.out
TEST tests/qapi-schema/args-int.out
TEST tests/qapi-schema/args-invalid.out
TEST tests/qapi-schema/args-member-array-bad.out
TEST tests/qapi-schema/args-member-case.out
TEST tests/qapi-schema/args-member-unknown.out
TEST tests/qapi-schema/args-name-clash.out
TEST tests/qapi-schema/args-union.out
TEST tests/qapi-schema/args-unknown.out
TEST tests/qapi-schema/bad-base.out
TEST tests/qapi-schema/bad-data.out
TEST tests/qapi-schema/bad-ident.out
TEST tests/qapi-schema/bad-type-bool.out
TEST tests/qapi-schema/bad-type-dict.out
TEST tests/qapi-schema/bad-type-int.out
TEST tests/qapi-schema/base-cycle-direct.out
TEST tests/qapi-schema/base-cycle-indirect.out
TEST tests/qapi-schema/command-int.out
TEST tests/qapi-schema/double-data.out
TEST tests/qapi-schema/comments.out
TEST tests/qapi-schema/double-type.out
TEST tests/qapi-schema/duplicate-key.out
TEST tests/qapi-schema/enum-bad-name.out
TEST tests/qapi-schema/empty.out
TEST tests/qapi-schema/enum-bad-prefix.out
TEST tests/qapi-schema/enum-clash-member.out
TEST tests/qapi-schema/enum-dict-member.out
TEST tests/qapi-schema/enum-member-case.out
TEST tests/qapi-schema/enum-int-member.out
TEST tests/qapi-schema/enum-wrong-data.out
TEST tests/qapi-schema/escape-outside-string.out
TEST tests/qapi-schema/enum-missing-data.out
TEST tests/qapi-schema/escape-too-big.out
TEST tests/qapi-schema/escape-too-short.out
TEST tests/qapi-schema/event-boxed-empty.out
TEST tests/qapi-schema/event-case.out
TEST tests/qapi-schema/event-nest-struct.out
TEST tests/qapi-schema/flat-union-array-branch.out
TEST tests/qapi-schema/flat-union-bad-base.out
TEST tests/qapi-schema/flat-union-base-any.out
TEST tests/qapi-schema/flat-union-bad-discriminator.out
TEST tests/qapi-schema/flat-union-base-union.out
TEST tests/qapi-schema/flat-union-incomplete-branch.out
TEST tests/qapi-schema/flat-union-clash-member.out
TEST tests/qapi-schema/flat-union-empty.out
TEST tests/qapi-schema/flat-union-inline.out
TEST tests/qapi-schema/flat-union-int-branch.out
TEST tests/qapi-schema/flat-union-invalid-branch-key.out
TEST tests/qapi-schema/flat-union-invalid-discriminator.out
TEST tests/qapi-schema/flat-union-optional-discriminator.out
TEST tests/qapi-schema/flat-union-no-base.out
TEST tests/qapi-schema/flat-union-string-discriminator.out
TEST tests/qapi-schema/funny-char.out
TEST tests/qapi-schema/ident-with-escape.out
TEST tests/qapi-schema/include-before-err.out
TEST tests/qapi-schema/include-cycle.out
TEST tests/qapi-schema/include-nested-err.out
TEST tests/qapi-schema/include-format-err.out
TEST tests/qapi-schema/include-no-file.out
TEST tests/qapi-schema/include-non-file.out
TEST tests/qapi-schema/include-relpath.out
TEST tests/qapi-schema/include-repetition.out
TEST tests/qapi-schema/include-self-cycle.out
TEST tests/qapi-schema/include-simple.out
TEST tests/qapi-schema/indented-expr.out
TEST tests/qapi-schema/leading-comma-list.out
TEST tests/qapi-schema/leading-comma-object.out
TEST tests/qapi-schema/missing-colon.out
TEST tests/qapi-schema/missing-comma-list.out
TEST tests/qapi-schema/missing-type.out
TEST tests/qapi-schema/missing-comma-object.out
TEST tests/qapi-schema/nested-struct-data.out
TEST tests/qapi-schema/qapi-schema-test.out
TEST tests/qapi-schema/non-objects.out
TEST tests/qapi-schema/quoted-structural-chars.out
TEST tests/qapi-schema/redefined-builtin.out
TEST tests/qapi-schema/redefined-command.out
TEST tests/qapi-schema/redefined-event.out
TEST tests/qapi-schema/redefined-type.out
TEST tests/qapi-schema/reserved-command-q.out
TEST tests/qapi-schema/reserved-enum-q.out
TEST tests/qapi-schema/reserved-member-has.out
TEST tests/qapi-schema/reserved-member-q.out
TEST tests/qapi-schema/reserved-member-u.out
TEST tests/qapi-schema/reserved-member-underscore.out
TEST tests/qapi-schema/reserved-type-kind.out
TEST tests/qapi-schema/reserved-type-list.out
TEST tests/qapi-schema/returns-alternate.out
TEST tests/qapi-schema/returns-array-bad.out
TEST tests/qapi-schema/returns-unknown.out
TEST tests/qapi-schema/returns-dict.out
TEST tests/qapi-schema/returns-whitelist.out
TEST tests/qapi-schema/struct-base-clash-deep.out
TEST tests/qapi-schema/struct-base-clash.out
TEST tests/qapi-schema/struct-data-invalid.out
TEST tests/qapi-schema/struct-member-invalid.out
TEST tests/qapi-schema/trailing-comma-list.out
TEST tests/qapi-schema/trailing-comma-object.out
TEST tests/qapi-schema/type-bypass-bad-gen.out
TEST tests/qapi-schema/unclosed-list.out
TEST tests/qapi-schema/unclosed-object.out
TEST tests/qapi-schema/unicode-str.out
TEST tests/qapi-schema/unclosed-string.out
TEST tests/qapi-schema/union-base-no-discriminator.out
TEST tests/qapi-schema/union-branch-case.out
TEST tests/qapi-schema/union-clash-branches.out
TEST tests/qapi-schema/union-empty.out
TEST tests/qapi-schema/union-invalid-base.out
TEST tests/qapi-schema/union-optional-branch.out
TEST tests/qapi-schema/union-unknown.out
TEST tests/qapi-schema/unknown-expr-key.out
TEST tests/qapi-schema/unknown-escape.out
CC tests/check-qdict.o
CC tests/test-char.o
CC tests/check-qfloat.o
CC tests/check-qint.o
CC tests/check-qstring.o
CC tests/check-qnull.o
CC tests/check-qlist.o
CC tests/check-qjson.o
CC tests/test-qobject-output-visitor.o
GEN tests/test-qapi-visit.c
GEN tests/test-qapi-types.c
GEN tests/test-qapi-event.c
GEN tests/test-qmp-introspect.c
CC tests/test-clone-visitor.o
CC tests/test-qobject-input-visitor.o
CC tests/test-qobject-input-strict.o
CC tests/test-qmp-commands.o
GEN tests/test-qmp-marshal.c
CC tests/test-string-input-visitor.o
CC tests/test-string-output-visitor.o
CC tests/test-qmp-event.o
CC tests/test-opts-visitor.o
CC tests/test-coroutine.o
CC tests/test-visitor-serialization.o
CC tests/test-aio.o
CC tests/test-iov.o
CC tests/test-throttle.o
CC tests/test-thread-pool.o
CC tests/test-hbitmap.o
CC tests/test-blockjob.o
CC tests/test-blockjob-txn.o
CC tests/test-x86-cpuid.o
CC tests/test-xbzrle.o
CC tests/test-vmstate.o
CC tests/test-cutils.o
CC tests/test-mul64.o
CC tests/test-int128.o
CC tests/rcutorture.o
CC tests/test-rcu-list.o
CC tests/test-qdist.o
/tmp/qemu-test/src/tests/test-int128.c:180: warning: ‘__noclone__’ attribute directive ignored
CC tests/test-qht.o
CC tests/test-qht-par.o
CC tests/qht-bench.o
CC tests/test-bitops.o
CC tests/check-qom-interface.o
CC tests/test-write-threshold.o
CC tests/check-qom-proplist.o
CC tests/test-qemu-opts.o
CC tests/test-crypto-hash.o
CC tests/test-crypto-cipher.o
CC tests/test-crypto-secret.o
CC tests/test-qga.o
CC tests/libqtest.o
CC tests/test-timed-average.o
CC tests/test-io-task.o
CC tests/test-io-channel-socket.o
CC tests/io-channel-helpers.o
CC tests/test-io-channel-file.o
CC tests/test-io-channel-command.o
CC tests/test-io-channel-buffer.o
CC tests/test-base64.o
CC tests/test-crypto-ivgen.o
CC tests/test-crypto-afsplit.o
CC tests/test-crypto-xts.o
CC tests/test-crypto-block.o
CC tests/test-logging.o
CC tests/test-replication.o
CC tests/test-bufferiszero.o
CC tests/test-uuid.o
CC tests/ptimer-test.o
CC tests/ptimer-test-stubs.o
CC tests/vhost-user-test.o
CC tests/libqos/pci.o
CC tests/libqos/fw_cfg.o
CC tests/libqos/malloc.o
CC tests/libqos/i2c.o
CC tests/libqos/libqos.o
CC tests/libqos/malloc-spapr.o
CC tests/libqos/libqos-spapr.o
CC tests/libqos/rtas.o
CC tests/libqos/pci-spapr.o
CC tests/libqos/pci-pc.o
CC tests/libqos/malloc-pc.o
CC tests/libqos/libqos-pc.o
CC tests/libqos/ahci.o
CC tests/libqos/virtio.o
CC tests/libqos/virtio-pci.o
CC tests/libqos/virtio-mmio.o
CC tests/libqos/malloc-generic.o
CC tests/endianness-test.o
CC tests/fdc-test.o
CC tests/ide-test.o
CC tests/ahci-test.o
CC tests/hd-geo-test.o
/tmp/qemu-test/src/tests/ide-test.c: In function ‘cdrom_pio_impl’:
/tmp/qemu-test/src/tests/ide-test.c:791: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
/tmp/qemu-test/src/tests/ide-test.c: In function ‘test_cdrom_dma’:
/tmp/qemu-test/src/tests/ide-test.c:886: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
CC tests/boot-order-test.o
CC tests/bios-tables-test.o
CC tests/boot-sector.o
CC tests/boot-serial-test.o
CC tests/pxe-test.o
CC tests/rtc-test.o
CC tests/ipmi-kcs-test.o
CC tests/ipmi-bt-test.o
CC tests/fw_cfg-test.o
CC tests/i440fx-test.o
CC tests/drive_del-test.o
CC tests/wdt_ib700-test.o
CC tests/tco-test.o
CC tests/e1000-test.o
CC tests/e1000e-test.o
CC tests/rtl8139-test.o
CC tests/pcnet-test.o
CC tests/eepro100-test.o
CC tests/nvme-test.o
CC tests/ne2000-test.o
CC tests/ac97-test.o
CC tests/virtio-net-test.o
CC tests/es1370-test.o
CC tests/virtio-balloon-test.o
CC tests/virtio-blk-test.o
CC tests/virtio-rng-test.o
CC tests/virtio-scsi-test.o
CC tests/virtio-serial-test.o
CC tests/tpci200-test.o
CC tests/virtio-console-test.o
CC tests/ipoctal232-test.o
CC tests/display-vga-test.o
CC tests/intel-hda-test.o
CC tests/ivshmem-test.o
CC tests/vmxnet3-test.o
CC tests/pvpanic-test.o
CC tests/i82801b11-test.o
CC tests/ioh3420-test.o
CC tests/usb-hcd-ohci-test.o
CC tests/libqos/usb.o
CC tests/usb-hcd-uhci-test.o
CC tests/usb-hcd-ehci-test.o
CC tests/usb-hcd-xhci-test.o
CC tests/pc-cpu-test.o
CC tests/q35-test.o
CC tests/test-netfilter.o
CC tests/test-filter-mirror.o
CC tests/test-filter-redirector.o
CC tests/postcopy-test.o
CC tests/test-x86-cpuid-compat.o
CC tests/device-introspect-test.o
CC tests/qom-test.o
LINK tests/check-qdict
LINK tests/test-char
LINK tests/check-qfloat
LINK tests/check-qint
LINK tests/check-qstring
LINK tests/check-qlist
LINK tests/check-qnull
LINK tests/check-qjson
CC tests/test-qapi-types.o
CC tests/test-qapi-visit.o
CC tests/test-qapi-event.o
CC tests/test-qmp-introspect.o
CC tests/test-qmp-marshal.o
LINK tests/test-coroutine
LINK tests/test-visitor-serialization
LINK tests/test-iov
LINK tests/test-aio
LINK tests/test-throttle
LINK tests/test-thread-pool
LINK tests/test-hbitmap
LINK tests/test-blockjob
LINK tests/test-blockjob-txn
LINK tests/test-x86-cpuid
LINK tests/test-xbzrle
LINK tests/test-vmstate
LINK tests/test-cutils
LINK tests/test-mul64
LINK tests/test-int128
LINK tests/rcutorture
LINK tests/test-rcu-list
LINK tests/test-qdist
LINK tests/test-qht
LINK tests/qht-bench
LINK tests/test-bitops
LINK tests/check-qom-interface
LINK tests/check-qom-proplist
LINK tests/test-qemu-opts
LINK tests/test-write-threshold
LINK tests/test-crypto-hash
LINK tests/test-crypto-cipher
LINK tests/test-crypto-secret
LINK tests/test-qga
LINK tests/test-timed-average
LINK tests/test-io-task
LINK tests/test-io-channel-socket
LINK tests/test-io-channel-file
LINK tests/test-io-channel-command
LINK tests/test-io-channel-buffer
LINK tests/test-base64
LINK tests/test-crypto-ivgen
LINK tests/test-crypto-afsplit
LINK tests/test-crypto-xts
LINK tests/test-crypto-block
LINK tests/test-logging
LINK tests/test-replication
LINK tests/test-bufferiszero
LINK tests/test-uuid
LINK tests/ptimer-test
LINK tests/vhost-user-test
LINK tests/endianness-test
LINK tests/fdc-test
LINK tests/ide-test
LINK tests/ahci-test
LINK tests/hd-geo-test
LINK tests/boot-order-test
LINK tests/bios-tables-test
LINK tests/boot-serial-test
LINK tests/pxe-test
LINK tests/rtc-test
LINK tests/ipmi-kcs-test
LINK tests/ipmi-bt-test
LINK tests/i440fx-test
LINK tests/fw_cfg-test
LINK tests/drive_del-test
LINK tests/wdt_ib700-test
LINK tests/tco-test
LINK tests/e1000-test
LINK tests/e1000e-test
LINK tests/rtl8139-test
LINK tests/pcnet-test
LINK tests/eepro100-test
LINK tests/ne2000-test
LINK tests/nvme-test
LINK tests/ac97-test
LINK tests/es1370-test
LINK tests/virtio-net-test
LINK tests/virtio-balloon-test
LINK tests/virtio-blk-test
LINK tests/virtio-rng-test
LINK tests/virtio-scsi-test
LINK tests/virtio-serial-test
LINK tests/virtio-console-test
LINK tests/tpci200-test
LINK tests/ipoctal232-test
LINK tests/display-vga-test
LINK tests/intel-hda-test
LINK tests/ivshmem-test
LINK tests/vmxnet3-test
LINK tests/pvpanic-test
LINK tests/i82801b11-test
LINK tests/ioh3420-test
LINK tests/usb-hcd-ohci-test
LINK tests/usb-hcd-uhci-test
LINK tests/usb-hcd-ehci-test
LINK tests/usb-hcd-xhci-test
LINK tests/pc-cpu-test
LINK tests/q35-test
LINK tests/test-netfilter
LINK tests/test-filter-mirror
LINK tests/test-filter-redirector
LINK tests/postcopy-test
LINK tests/test-x86-cpuid-compat
LINK tests/device-introspect-test
LINK tests/qom-test
GTESTER tests/check-qfloat
GTESTER tests/check-qdict
GTESTER tests/test-char
GTESTER tests/check-qint
GTESTER tests/check-qstring
GTESTER tests/check-qlist
GTESTER tests/check-qnull
GTESTER tests/check-qjson
LINK tests/test-qobject-output-visitor
LINK tests/test-clone-visitor
LINK tests/test-qobject-input-visitor
LINK tests/test-qobject-input-strict
LINK tests/test-qmp-commands
LINK tests/test-string-input-visitor
LINK tests/test-string-output-visitor
LINK tests/test-qmp-event
LINK tests/test-opts-visitor
GTESTER tests/test-coroutine
GTESTER tests/test-throttle
GTESTER tests/test-visitor-serialization
GTESTER tests/test-iov
GTESTER tests/test-aio
GTESTER tests/test-thread-pool
GTESTER tests/test-hbitmap
GTESTER tests/test-blockjob
GTESTER tests/test-blockjob-txn
GTESTER tests/test-x86-cpuid
GTESTER tests/test-xbzrle
GTESTER tests/test-vmstate
GTESTER tests/test-cutils
GTESTER tests/test-mul64
GTESTER tests/test-int128
GTESTER tests/rcutorture
Failed to load simple/primitive:b_1
Failed to load simple/primitive:i64_2
Failed to load simple/primitive:i32_1
Failed to load simple/primitive:i32_1
GTESTER tests/test-rcu-list
GTESTER tests/test-qdist
GTESTER tests/test-qht
LINK tests/test-qht-par
GTESTER tests/test-bitops
GTESTER tests/check-qom-interface
GTESTER tests/check-qom-proplist
GTESTER tests/test-qemu-opts
GTESTER tests/test-write-threshold
GTESTER tests/test-crypto-cipher
GTESTER tests/test-crypto-secret
GTESTER tests/test-crypto-hash
GTESTER tests/test-qga
GTESTER tests/test-timed-average
GTESTER tests/test-io-task
GTESTER tests/test-io-channel-socket
GTESTER tests/test-io-channel-file
GTESTER tests/test-io-channel-command
GTESTER tests/test-io-channel-buffer
GTESTER tests/test-base64
GTESTER tests/test-crypto-ivgen
GTESTER tests/test-crypto-afsplit
GTESTER tests/test-crypto-xts
GTESTER tests/test-crypto-block
GTESTER tests/test-logging
GTESTER tests/test-replication
GTESTER tests/test-bufferiszero
GTESTER tests/test-uuid
GTESTER tests/ptimer-test
GTESTER check-qtest-x86_64
GTESTER check-qtest-aarch64
GTESTER tests/test-qobject-output-visitor
GTESTER tests/test-clone-visitor
GTESTER tests/test-qobject-input-visitor
GTESTER tests/test-qobject-input-strict
GTESTER tests/test-qmp-commands
GTESTER tests/test-string-input-visitor
GTESTER tests/test-string-output-visitor
GTESTER tests/test-qmp-event
GTESTER tests/test-opts-visitor
GTESTER tests/test-qht-par
ftruncate: Permission denied
ftruncate: Permission denied
ftruncate: Permission denied
**
ERROR:/tmp/qemu-test/src/tests/vhost-user-test.c:668:test_migrate: assertion failed: (qdict_haskey(rsp, "return"))
GTester: last random seed: R02S5a436573dbd82aba6bbaf6d326c3147b
ftruncate: Permission denied
ftruncate: Permission denied
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-c68h00s0/src'
BUILD fedora
make[1]: Entering directory `/var/tmp/patchew-tester-tmp-c68h00s0/src'
ARCHIVE qemu.tgz
ARCHIVE dtc.tgz
COPY RUNNER
RUN test-mingw in qemu:fedora
Packages installed:
PyYAML-3.11-12.fc24.x86_64
SDL-devel-1.2.15-21.fc24.x86_64
bc-1.06.95-16.fc24.x86_64
bison-3.0.4-4.fc24.x86_64
ccache-3.3.2-1.fc24.x86_64
clang-3.8.0-2.fc24.x86_64
findutils-4.6.0-7.fc24.x86_64
flex-2.6.0-2.fc24.x86_64
gcc-6.2.1-2.fc24.x86_64
gcc-c++-6.2.1-2.fc24.x86_64
git-2.7.4-3.fc24.x86_64
glib2-devel-2.48.2-1.fc24.x86_64
libfdt-devel-1.4.2-1.fc24.x86_64
make-4.1-5.fc24.x86_64
mingw32-SDL-1.2.15-7.fc24.noarch
mingw32-bzip2-1.0.6-7.fc24.noarch
mingw32-curl-7.47.0-1.fc24.noarch
mingw32-glib2-2.48.2-1.fc24.noarch
mingw32-gmp-6.1.0-1.fc24.noarch
mingw32-gnutls-3.4.14-1.fc24.noarch
mingw32-gtk2-2.24.31-1.fc24.noarch
mingw32-gtk3-3.20.9-1.fc24.noarch
mingw32-libjpeg-turbo-1.5.0-1.fc24.noarch
mingw32-libpng-1.6.23-1.fc24.noarch
mingw32-libssh2-1.4.3-5.fc24.noarch
mingw32-libtasn1-4.5-2.fc24.noarch
mingw32-nettle-3.2-1.fc24.noarch
mingw32-pixman-0.34.0-1.fc24.noarch
mingw32-pkg-config-0.28-6.fc24.x86_64
mingw64-SDL-1.2.15-7.fc24.noarch
mingw64-bzip2-1.0.6-7.fc24.noarch
mingw64-curl-7.47.0-1.fc24.noarch
mingw64-glib2-2.48.2-1.fc24.noarch
mingw64-gmp-6.1.0-1.fc24.noarch
mingw64-gnutls-3.4.14-1.fc24.noarch
mingw64-gtk2-2.24.31-1.fc24.noarch
mingw64-gtk3-3.20.9-1.fc24.noarch
mingw64-libjpeg-turbo-1.5.0-1.fc24.noarch
mingw64-libpng-1.6.23-1.fc24.noarch
mingw64-libssh2-1.4.3-5.fc24.noarch
mingw64-libtasn1-4.5-2.fc24.noarch
mingw64-nettle-3.2-1.fc24.noarch
mingw64-pixman-0.34.0-1.fc24.noarch
mingw64-pkg-config-0.28-6.fc24.x86_64
perl-5.22.2-362.fc24.x86_64
pixman-devel-0.34.0-2.fc24.x86_64
sparse-0.5.0-7.fc24.x86_64
tar-1.28-7.fc24.x86_64
which-2.20-13.fc24.x86_64
zlib-devel-1.2.8-10.fc24.x86_64
Environment variables:
PACKAGES=ccache git tar PyYAML sparse flex bison glib2-devel pixman-devel zlib-devel SDL-devel libfdt-devel gcc gcc-c++ clang make perl which bc findutils mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL mingw32-pkg-config mingw32-gtk2 mingw32-gtk3 mingw32-gnutls mingw32-nettle mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp mingw64-SDL mingw64-pkg-config mingw64-gtk2 mingw64-gtk3 mingw64-gnutls mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo mingw64-libpng mingw64-curl mingw64-libssh2 mingw64-bzip2
HOSTNAME=
TERM=xterm
MAKEFLAGS= -j16
HISTSIZE=1000
J=16
USER=root
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
MAIL=/var/spool/mail/root
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
TARGET_LIST=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
FEATURES=mingw clang pyyaml dtc
DEBUG=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/var/tmp/qemu-build/install --cross-prefix=x86_64-w64-mingw32- --enable-trace-backends=simple --enable-debug --enable-gnutls --enable-nettle --enable-curl --enable-vnc --enable-bzip2 --enable-guest-agent --with-sdlabi=1.2 --with-gtkabi=2.0
Install prefix /var/tmp/qemu-build/install
BIOS directory /var/tmp/qemu-build/install
binary directory /var/tmp/qemu-build/install
library directory /var/tmp/qemu-build/install/lib
module directory /var/tmp/qemu-build/install/lib
libexec directory /var/tmp/qemu-build/install/libexec
include directory /var/tmp/qemu-build/install/include
config directory /var/tmp/qemu-build/install
local state directory queried at runtime
Windows SDK no
Source path /tmp/qemu-test/src
C compiler x86_64-w64-mingw32-gcc
Host C compiler cc
C++ compiler x86_64-w64-mingw32-g++
Objective-C compiler clang
ARFLAGS rv
CFLAGS -g
QEMU_CFLAGS -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -Werror -mms-bitfields -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/glib-2.0 -I/usr/x86_64-w64-mingw32/sys-root/mingw/lib/glib-2.0/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -m64 -mcx16 -mthreads -D__USE_MINGW_ANSI_STDIO=1 -DWIN32_LEAN_AND_MEAN -DWINVER=0x501 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wno-shift-negative-value -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/p11-kit-1 -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/libpng16
LDFLAGS -Wl,--nxcompat -Wl,--no-seh -Wl,--dynamicbase -Wl,--warn-common -m64 -g
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
tcg debug enabled yes
gprof enabled no
sparse enabled no
strip binaries no
profiler no
static build no
pixman system
SDL support yes (1.2.15)
GTK support yes (2.24.31)
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support yes
GNUTLS rnd yes
libgcrypt no
libgcrypt kdf no
nettle yes (3.2)
nettle kdf yes
libtasn1 yes
curses support no
virgl support no
curl support yes
mingw32 support yes
Audio drivers dsound
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
VNC support yes
VNC SASL support no
VNC JPEG support yes
VNC PNG support yes
xen support no
brlapi support no
bluez support no
Documentation no
PIE no
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support no
Install blobs yes
KVM support no
COLO support yes
RDMA support no
TCG interpreter no
fdt support yes
preadv support no
fdatasync no
madvise no
posix_madvise no
libcap-ng support no
vhost-net support no
vhost-scsi support no
vhost-vsock support no
Trace backends simple
Trace output file trace-<pid>
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info yes
QGA MSI support no
seccomp support no
coroutine backend win32
coroutine pool yes
debug stack usage no
GlusterFS support no
Archipelago support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support yes
TPM passthrough no
QOM debugging yes
lzo support no
snappy support no
bzip2 support yes
NUMA host support no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
mkdir -p dtc/libfdt
GEN x86_64-softmmu/config-devices.mak.tmp
mkdir -p dtc/tests
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qmp-commands.h
GEN qemu-options.def
GEN qapi-types.h
GEN qapi-visit.h
GEN qapi-event.h
GEN qmp-introspect.h
GEN module_block.h
GEN tests/test-qapi-types.h
GEN tests/test-qapi-visit.h
GEN tests/test-qmp-commands.h
GEN tests/test-qapi-event.h
GEN tests/test-qmp-introspect.h
GEN x86_64-softmmu/config-devices.mak
GEN trace/generated-tracers.h
GEN aarch64-softmmu/config-devices.mak
GEN trace/generated-tcg-tracers.h
DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
DEP /tmp/qemu-test/src/dtc/tests/trees.S
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
DEP /tmp/qemu-test/src/dtc/tests/testutils.c
DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
GEN config-all-devices.mak
DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
DEP /tmp/qemu-test/src/dtc/tests/incbin.c
DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
DEP /tmp/qemu-test/src/dtc/tests/path-references.c
DEP /tmp/qemu-test/src/dtc/tests/references.c
DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
DEP /tmp/qemu-test/src/dtc/tests/del_node.c
DEP /tmp/qemu-test/src/dtc/tests/del_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop.c
DEP /tmp/qemu-test/src/dtc/tests/set_name.c
DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
DEP /tmp/qemu-test/src/dtc/tests/notfound.c
DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
DEP /tmp/qemu-test/src/dtc/tests/get_path.c
DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/getprop.c
DEP /tmp/qemu-test/src/dtc/tests/get_name.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
DEP /tmp/qemu-test/src/dtc/tests/find_property.c
DEP /tmp/qemu-test/src/dtc/tests/root_node.c
DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
DEP /tmp/qemu-test/src/dtc/util.c
DEP /tmp/qemu-test/src/dtc/fdtput.c
DEP /tmp/qemu-test/src/dtc/fdtget.c
DEP /tmp/qemu-test/src/dtc/fdtdump.c
LEX convert-dtsv0-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/srcpos.c
BISON dtc-parser.tab.c
LEX dtc-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/treesource.c
DEP /tmp/qemu-test/src/dtc/livetree.c
DEP /tmp/qemu-test/src/dtc/fstree.c
DEP /tmp/qemu-test/src/dtc/flattree.c
DEP /tmp/qemu-test/src/dtc/dtc.c
DEP /tmp/qemu-test/src/dtc/data.c
DEP /tmp/qemu-test/src/dtc/checks.c
DEP convert-dtsv0-lexer.lex.c
DEP dtc-parser.tab.c
DEP dtc-lexer.lex.c
CHK version_gen.h
UPD version_gen.h
DEP /tmp/qemu-test/src/dtc/util.c
CC libfdt/fdt.o
CC libfdt/fdt_ro.o
CC libfdt/fdt_wip.o
CC libfdt/fdt_sw.o
CC libfdt/fdt_rw.o
CC libfdt/fdt_strerror.o
CC libfdt/fdt_empty_tree.o
AR libfdt/libfdt.a
x86_64-w64-mingw32-ar: creating libfdt/libfdt.a
a - libfdt/fdt.o
a - libfdt/fdt_ro.o
a - libfdt/fdt_wip.o
a - libfdt/fdt_sw.o
a - libfdt/fdt_rw.o
a - libfdt/fdt_strerror.o
a - libfdt/fdt_empty_tree.o
RC version.lo
RC version.o
GEN qga/qapi-generated/qga-qapi-types.h
GEN qga/qapi-generated/qga-qapi-visit.h
GEN qga/qapi-generated/qga-qmp-commands.h
GEN qga/qapi-generated/qga-qapi-types.c
GEN qga/qapi-generated/qga-qapi-visit.c
GEN qga/qapi-generated/qga-qmp-marshal.c
GEN qmp-introspect.c
GEN qapi-types.c
GEN qapi-visit.c
GEN qapi-event.c
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qint.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qfloat.o
CC qobject/qbool.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
GEN trace/generated-tracers.c
CC trace/simple.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/event_notifier-win32.o
CC util/oslib-win32.o
CC util/qemu-thread-win32.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/crc32c.o
CC util/hexdump.o
CC util/uuid.o
CC util/getauxval.o
CC util/throttle.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-win32.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/qht.o
CC util/qdist.o
CC util/range.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset-add-fd.o
CC stubs/fdset-find-fd.o
CC stubs/fdset-get-fd.o
CC stubs/fdset-remove-fd.o
CC stubs/gdbstub.o
CC stubs/get-fd.o
CC stubs/get-next-serial.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/mon-is-qmp.o
CC stubs/monitor-init.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/replay-user.o
CC stubs/reset.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/fd-register.o
CC stubs/cpus.o
CC stubs/kvm.o
CC stubs/qmp_pc_dimm_device_list.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/vhost.o
CC stubs/iohandler.o
CC stubs/smbios_type_38.o
CC stubs/ipmi.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/migration-colo.o
GEN qemu-img-cmds.h
CC async.o
CC thread-pool.o
CC block.o
CC blockjob.o
CC main-loop.o
CC iohandler.o
CC qemu-timer.o
CC aio-win32.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw_bsd.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qed.o
CC block/qed-gencb.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/raw-win32.o
CC block/win32-aio.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/crypto.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC block/curl.o
CC block/ssh.o
CC block/dmg-bz2.o
CC crypto/init.o
CC crypto/hash.o
CC crypto/hash-nettle.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlscredsx509.o
CC crypto/tlssession.o
CC crypto/secret.o
CC crypto/random-gnutls.o
CC crypto/pbkdf.o
CC crypto/pbkdf-nettle.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel.o
CC io/channel-buffer.o
CC io/channel-command.o
CC io/channel-file.o
CC io/channel-socket.o
CC io/channel-tls.o
CC io/channel-watch.o
CC io/channel-websock.o
CC io/channel-util.o
CC io/task.o
CC qom/object.o
CC qom/container.o
CC qom/qom-qobject.o
CC qom/object_interfaces.o
CC qemu-io.o
CC blockdev.o
CC blockdev-nbd.o
CC iothread.o
CC qdev-monitor.o
CC device-hotplug.o
CC os-win32.o
CC qemu-char.o
CC page_cache.o
CC accel.o
CC bt-host.o
CC bt-vhci.o
CC dma-helpers.o
CC vl.o
CC tpm.o
CC device_tree.o
GEN qmp-marshal.c
CC qmp.o
CC hmp.o
CC cpus-common.o
CC audio/audio.o
CC audio/noaudio.o
CC audio/wavaudio.o
CC audio/sdlaudio.o
CC audio/mixeng.o
CC audio/dsoundaudio.o
CC audio/audio_win_int.o
CC audio/wavcapture.o
CC backends/rng.o
CC backends/rng-egd.o
CC backends/msmouse.o
CC backends/testdev.o
CC backends/tpm.o
CC backends/hostmem.o
CC backends/hostmem-ram.o
CC backends/cryptodev.o
CC backends/cryptodev-builtin.o
CC block/stream.o
CC disas/arm.o
CXX disas/arm-a64.o
CC disas/i386.o
CXX disas/libvixl/vixl/utils.o
CXX disas/libvixl/vixl/compiler-intrinsics.o
CXX disas/libvixl/vixl/a64/instructions-a64.o
CXX disas/libvixl/vixl/a64/decoder-a64.o
CXX disas/libvixl/vixl/a64/disasm-a64.o
CC hw/acpi/core.o
CC hw/acpi/piix4.o
CC hw/acpi/pcihp.o
CC hw/acpi/ich9.o
CC hw/acpi/tco.o
CC hw/acpi/cpu_hotplug.o
CC hw/acpi/memory_hotplug.o
CC hw/acpi/memory_hotplug_acpi_table.o
CC hw/acpi/cpu.o
CC hw/acpi/nvdimm.o
CC hw/acpi/acpi_interface.o
CC hw/acpi/bios-linker-loader.o
CC hw/acpi/aml-build.o
CC hw/acpi/ipmi.o
CC hw/audio/sb16.o
CC hw/audio/es1370.o
CC hw/audio/ac97.o
CC hw/audio/fmopl.o
CC hw/audio/adlib.o
CC hw/audio/gus.o
CC hw/audio/gusemu_hal.o
CC hw/audio/gusemu_mixer.o
CC hw/audio/cs4231a.o
CC hw/audio/intel-hda.o
CC hw/audio/hda-codec.o
CC hw/audio/pcspk.o
CC hw/audio/wm8750.o
CC hw/audio/pl041.o
CC hw/audio/lm4549.o
CC hw/audio/marvell_88w8618.o
CC hw/block/block.o
CC hw/block/cdrom.o
CC hw/block/hd-geometry.o
CC hw/block/fdc.o
CC hw/block/m25p80.o
CC hw/block/nand.o
CC hw/block/pflash_cfi01.o
CC hw/block/pflash_cfi02.o
CC hw/block/ecc.o
CC hw/block/onenand.o
CC hw/block/nvme.o
CC hw/bt/core.o
CC hw/bt/l2cap.o
CC hw/bt/sdp.o
CC hw/bt/hci.o
CC hw/bt/hid.o
CC hw/bt/hci-csr.o
CC hw/char/ipoctal232.o
CC hw/char/parallel.o
CC hw/char/pl011.o
CC hw/char/serial.o
CC hw/char/serial-isa.o
CC hw/char/serial-pci.o
CC hw/char/virtio-console.o
CC hw/char/cadence_uart.o
CC hw/char/debugcon.o
CC hw/char/imx_serial.o
CC hw/core/qdev.o
CC hw/core/qdev-properties.o
CC hw/core/bus.o
CC hw/core/fw-path-provider.o
CC hw/core/irq.o
CC hw/core/hotplug.o
CC hw/core/ptimer.o
CC hw/core/sysbus.o
CC hw/core/machine.o
CC hw/core/null-machine.o
CC hw/core/loader.o
CC hw/core/qdev-properties-system.o
CC hw/core/register.o
CC hw/core/or-irq.o
CC hw/core/platform-bus.o
CC hw/display/ads7846.o
CC hw/display/cirrus_vga.o
CC hw/display/pl110.o
CC hw/display/ssd0303.o
CC hw/display/ssd0323.o
CC hw/display/vga-pci.o
CC hw/display/vga-isa.o
CC hw/display/vmware_vga.o
CC hw/display/blizzard.o
CC hw/display/exynos4210_fimd.o
CC hw/display/framebuffer.o
CC hw/display/tc6393xb.o
CC hw/dma/pl080.o
CC hw/dma/pl330.o
CC hw/dma/i8257.o
CC hw/dma/xlnx-zynq-devcfg.o
CC hw/gpio/max7310.o
CC hw/gpio/pl061.o
CC hw/gpio/zaurus.o
CC hw/gpio/gpio_key.o
CC hw/i2c/core.o
CC hw/i2c/smbus.o
CC hw/i2c/smbus_eeprom.o
CC hw/i2c/i2c-ddc.o
CC hw/i2c/versatile_i2c.o
CC hw/i2c/smbus_ich9.o
CC hw/i2c/pm_smbus.o
CC hw/i2c/bitbang_i2c.o
CC hw/i2c/exynos4210_i2c.o
CC hw/i2c/imx_i2c.o
CC hw/i2c/aspeed_i2c.o
CC hw/ide/core.o
CC hw/ide/atapi.o
CC hw/ide/qdev.o
CC hw/ide/pci.o
CC hw/ide/isa.o
CC hw/ide/piix.o
CC hw/ide/microdrive.o
CC hw/ide/ahci.o
CC hw/ide/ich.o
CC hw/input/hid.o
CC hw/input/lm832x.o
CC hw/input/pckbd.o
CC hw/input/pl050.o
CC hw/input/ps2.o
CC hw/input/stellaris_input.o
CC hw/input/tsc2005.o
CC hw/input/vmmouse.o
CC hw/input/virtio-input.o
CC hw/input/virtio-input-hid.o
CC hw/intc/i8259_common.o
CC hw/intc/i8259.o
CC hw/intc/pl190.o
CC hw/intc/imx_avic.o
CC hw/intc/realview_gic.o
CC hw/intc/ioapic_common.o
CC hw/intc/arm_gic_common.o
CC hw/intc/arm_gic.o
CC hw/intc/arm_gicv2m.o
CC hw/intc/arm_gicv3_common.o
CC hw/intc/arm_gicv3.o
CC hw/intc/arm_gicv3_dist.o
CC hw/intc/arm_gicv3_redist.o
CC hw/intc/arm_gicv3_its_common.o
CC hw/intc/intc.o
CC hw/ipack/ipack.o
CC hw/ipack/tpci200.o
CC hw/ipmi/ipmi.o
CC hw/ipmi/ipmi_bmc_sim.o
CC hw/ipmi/ipmi_bmc_extern.o
CC hw/ipmi/isa_ipmi_kcs.o
CC hw/ipmi/isa_ipmi_bt.o
CC hw/isa/isa-bus.o
CC hw/isa/apm.o
CC hw/mem/pc-dimm.o
CC hw/mem/nvdimm.o
CC hw/misc/applesmc.o
CC hw/misc/max111x.o
CC hw/misc/tmp105.o
CC hw/misc/debugexit.o
CC hw/misc/sga.o
CC hw/misc/pc-testdev.o
CC hw/misc/pci-testdev.o
CC hw/misc/arm_l2x0.o
CC hw/misc/arm_integrator_debug.o
CC hw/misc/a9scu.o
CC hw/misc/arm11scu.o
CC hw/net/ne2000.o
CC hw/net/eepro100.o
CC hw/net/pcnet-pci.o
CC hw/net/pcnet.o
CC hw/net/e1000.o
CC hw/net/e1000x_common.o
CC hw/net/net_tx_pkt.o
CC hw/net/net_rx_pkt.o
CC hw/net/e1000e.o
CC hw/net/e1000e_core.o
CC hw/net/rtl8139.o
CC hw/net/vmxnet3.o
CC hw/net/smc91c111.o
CC hw/net/lan9118.o
CC hw/net/ne2000-isa.o
CC hw/net/xgmac.o
CC hw/net/allwinner_emac.o
CC hw/net/imx_fec.o
CC hw/net/cadence_gem.o
CC hw/net/stellaris_enet.o
CC hw/net/rocker/rocker.o
CC hw/net/rocker/rocker_fp.o
CC hw/net/rocker/rocker_desc.o
CC hw/net/rocker/rocker_world.o
CC hw/net/rocker/rocker_of_dpa.o
CC hw/nvram/eeprom93xx.o
CC hw/nvram/fw_cfg.o
CC hw/nvram/chrp_nvram.o
CC hw/pci-bridge/pci_bridge_dev.o
CC hw/pci-bridge/pci_expander_bridge.o
CC hw/pci-bridge/xio3130_upstream.o
CC hw/pci-bridge/xio3130_downstream.o
CC hw/pci-bridge/ioh3420.o
CC hw/pci-bridge/i82801b11.o
CC hw/pci-host/pam.o
CC hw/pci-host/versatile.o
CC hw/pci-host/piix.o
CC hw/pci-host/q35.o
CC hw/pci-host/gpex.o
CC hw/pci/pci.o
CC hw/pci/pci_bridge.o
CC hw/pci/msix.o
CC hw/pci/msi.o
CC hw/pci/shpc.o
CC hw/pci/slotid_cap.o
CC hw/pci/pci_host.o
CC hw/pci/pcie_host.o
CC hw/pci/pcie.o
CC hw/pci/pcie_aer.o
CC hw/pci/pcie_port.o
CC hw/pci/pci-stub.o
CC hw/pcmcia/pcmcia.o
CC hw/scsi/scsi-disk.o
CC hw/scsi/scsi-generic.o
CC hw/scsi/scsi-bus.o
CC hw/scsi/lsi53c895a.o
CC hw/scsi/mptsas.o
CC hw/scsi/mptconfig.o
CC hw/scsi/mptendian.o
CC hw/scsi/megasas.o
CC hw/scsi/vmw_pvscsi.o
CC hw/scsi/esp.o
CC hw/scsi/esp-pci.o
CC hw/sd/pl181.o
CC hw/sd/ssi-sd.o
CC hw/sd/sd.o
CC hw/sd/core.o
CC hw/sd/sdhci.o
CC hw/smbios/smbios.o
CC hw/smbios/smbios_type_38.o
CC hw/ssi/pl022.o
CC hw/ssi/ssi.o
CC hw/ssi/xilinx_spips.o
CC hw/ssi/aspeed_smc.o
CC hw/ssi/stm32f2xx_spi.o
CC hw/timer/arm_timer.o
CC hw/timer/arm_mptimer.o
CC hw/timer/a9gtimer.o
CC hw/timer/cadence_ttc.o
CC hw/timer/ds1338.o
CC hw/timer/hpet.o
CC hw/timer/i8254_common.o
CC hw/timer/i8254.o
CC hw/timer/pl031.o
CC hw/timer/twl92230.o
CC hw/timer/imx_epit.o
CC hw/timer/imx_gpt.o
CC hw/timer/stm32f2xx_timer.o
CC hw/timer/aspeed_timer.o
CC hw/tpm/tpm_tis.o
CC hw/usb/core.o
CC hw/usb/combined-packet.o
CC hw/usb/bus.o
CC hw/usb/libhw.o
CC hw/usb/desc.o
CC hw/usb/desc-msos.o
CC hw/usb/hcd-uhci.o
CC hw/usb/hcd-ohci.o
CC hw/usb/hcd-ehci.o
CC hw/usb/hcd-ehci-pci.o
CC hw/usb/hcd-ehci-sysbus.o
CC hw/usb/hcd-xhci.o
CC hw/usb/hcd-musb.o
CC hw/usb/dev-hub.o
CC hw/usb/dev-hid.o
CC hw/usb/dev-wacom.o
CC hw/usb/dev-storage.o
CC hw/usb/dev-uas.o
CC hw/usb/dev-audio.o
CC hw/usb/dev-serial.o
CC hw/usb/dev-network.o
CC hw/usb/dev-bluetooth.o
CC hw/usb/dev-smartcard-reader.o
CC hw/usb/host-stub.o
CC hw/virtio/virtio-rng.o
CC hw/virtio/virtio-pci.o
CC hw/virtio/virtio-bus.o
CC hw/virtio/virtio-mmio.o
CC hw/watchdog/watchdog.o
CC hw/watchdog/wdt_i6300esb.o
CC hw/watchdog/wdt_ib700.o
CC migration/migration.o
CC migration/socket.o
CC migration/fd.o
CC migration/exec.o
CC migration/tls.o
CC migration/colo-comm.o
CC migration/colo.o
CC migration/colo-failover.o
CC migration/vmstate.o
CC migration/qemu-file.o
CC migration/qemu-file-channel.o
CC migration/xbzrle.o
CC migration/postcopy-ram.o
CC migration/qjson.o
CC migration/block.o
CC net/net.o
CC net/queue.o
CC net/checksum.o
CC net/util.o
CC net/hub.o
CC net/socket.o
CC net/dump.o
CC net/eth.o
CC net/tap-win32.o
CC net/slirp.o
CC net/filter.o
CC net/filter-buffer.o
CC net/filter-mirror.o
CC net/colo-compare.o
CC net/colo.o
CC net/filter-rewriter.o
CC qom/cpu.o
CC replay/replay.o
CC replay/replay-internal.o
CC replay/replay-events.o
CC replay/replay-time.o
CC replay/replay-input.o
CC replay/replay-char.o
CC replay/replay-snapshot.o
CC slirp/cksum.o
CC slirp/if.o
CC slirp/ip_icmp.o
CC slirp/ip6_icmp.o
CC slirp/ip6_input.o
CC slirp/ip6_output.o
CC slirp/ip_input.o
CC slirp/ip_output.o
CC slirp/dnssearch.o
CC slirp/dhcpv6.o
CC slirp/slirp.o
CC slirp/mbuf.o
CC slirp/misc.o
CC slirp/sbuf.o
CC slirp/socket.o
CC slirp/tcp_input.o
CC slirp/tcp_output.o
CC slirp/tcp_subr.o
CC slirp/tcp_timer.o
CC slirp/udp.o
CC slirp/udp6.o
CC slirp/bootp.o
CC slirp/tftp.o
CC slirp/arp_table.o
CC slirp/ndp_table.o
CC ui/keymaps.o
CC ui/console.o
CC ui/cursor.o
CC ui/qemu-pixman.o
CC ui/input.o
CC ui/input-keymap.o
CC ui/input-legacy.o
CC ui/sdl.o
CC ui/sdl_zoom.o
CC ui/x_keymap.o
CC ui/vnc.o
CC ui/vnc-enc-zlib.o
CC ui/vnc-enc-hextile.o
CC ui/vnc-enc-tight.o
CC ui/vnc-palette.o
CC ui/vnc-enc-zrle.o
CC ui/vnc-auth-vencrypt.o
CC ui/vnc-ws.o
CC ui/vnc-jobs.o
CC ui/gtk.o
AS optionrom/multiboot.o
AS optionrom/linuxboot.o
CC optionrom/linuxboot_dma.o
AS optionrom/kvmvapic.o
BUILD optionrom/multiboot.img
BUILD optionrom/linuxboot.img
BUILD optionrom/linuxboot_dma.img
BUILD optionrom/kvmvapic.img
BUILD optionrom/multiboot.raw
BUILD optionrom/linuxboot.raw
BUILD optionrom/linuxboot_dma.raw
BUILD optionrom/kvmvapic.raw
SIGN optionrom/multiboot.bin
SIGN optionrom/linuxboot.bin
SIGN optionrom/linuxboot_dma.bin
SIGN optionrom/kvmvapic.bin
CC qga/commands.o
CC qga/guest-agent-command-state.o
CC qga/main.o
CC qga/commands-win32.o
CC qga/channel-win32.o
CC qga/service-win32.o
CC qga/vss-win32.o
CC qga/qapi-generated/qga-qapi-types.o
CC qga/qapi-generated/qga-qapi-visit.o
CC qga/qapi-generated/qga-qmp-marshal.o
CC qmp-introspect.o
CC qapi-types.o
CC qapi-visit.o
CC qapi-event.o
AR libqemustub.a
CC qemu-img.o
CC qmp-marshal.o
CC trace/generated-tracers.o
AR libqemuutil.a
LINK qemu-ga.exe
LINK qemu-io.exe
LINK qemu-img.exe
GEN x86_64-softmmu/hmp-commands.h
GEN x86_64-softmmu/hmp-commands-info.h
GEN x86_64-softmmu/config-target.h
GEN aarch64-softmmu/hmp-commands-info.h
GEN aarch64-softmmu/hmp-commands.h
GEN aarch64-softmmu/config-target.h
CC x86_64-softmmu/exec.o
CC x86_64-softmmu/cpu-exec.o
CC x86_64-softmmu/translate-all.o
CC x86_64-softmmu/translate-common.o
CC x86_64-softmmu/cpu-exec-common.o
CC x86_64-softmmu/tcg/tcg-op.o
CC x86_64-softmmu/tcg/tcg.o
CC x86_64-softmmu/tcg/optimize.o
CC x86_64-softmmu/tcg/tcg-common.o
CC x86_64-softmmu/fpu/softfloat.o
CC x86_64-softmmu/disas.o
CC x86_64-softmmu/tcg-runtime.o
CC x86_64-softmmu/kvm-stub.o
CC x86_64-softmmu/arch_init.o
CC x86_64-softmmu/cpus.o
CC x86_64-softmmu/monitor.o
CC x86_64-softmmu/gdbstub.o
CC x86_64-softmmu/balloon.o
CC x86_64-softmmu/ioport.o
CC x86_64-softmmu/numa.o
CC x86_64-softmmu/qtest.o
CC x86_64-softmmu/bootdevice.o
CC aarch64-softmmu/exec.o
CC aarch64-softmmu/translate-all.o
CC x86_64-softmmu/memory.o
CC aarch64-softmmu/cpu-exec.o
CC x86_64-softmmu/cputlb.o
CC x86_64-softmmu/memory_mapping.o
CC x86_64-softmmu/dump.o
CC x86_64-softmmu/migration/ram.o
CC aarch64-softmmu/translate-common.o
CC aarch64-softmmu/cpu-exec-common.o
CC x86_64-softmmu/migration/savevm.o
CC aarch64-softmmu/tcg/tcg.o
CC x86_64-softmmu/xen-common-stub.o
CC aarch64-softmmu/tcg/tcg-op.o
CC x86_64-softmmu/xen-hvm-stub.o
CC aarch64-softmmu/tcg/optimize.o
CC aarch64-softmmu/tcg/tcg-common.o
CC aarch64-softmmu/fpu/softfloat.o
CC x86_64-softmmu/hw/block/virtio-blk.o
CC aarch64-softmmu/disas.o
CC x86_64-softmmu/hw/block/dataplane/virtio-blk.o
CC aarch64-softmmu/tcg-runtime.o
GEN aarch64-softmmu/gdbstub-xml.c
CC x86_64-softmmu/hw/char/virtio-serial-bus.o
CC aarch64-softmmu/kvm-stub.o
CC x86_64-softmmu/hw/core/nmi.o
CC aarch64-softmmu/arch_init.o
CC aarch64-softmmu/cpus.o
CC x86_64-softmmu/hw/core/generic-loader.o
CC x86_64-softmmu/hw/cpu/core.o
CC aarch64-softmmu/monitor.o
CC x86_64-softmmu/hw/display/vga.o
CC aarch64-softmmu/gdbstub.o
CC aarch64-softmmu/balloon.o
CC aarch64-softmmu/ioport.o
CC aarch64-softmmu/numa.o
CC x86_64-softmmu/hw/display/virtio-gpu.o
CC aarch64-softmmu/qtest.o
CC x86_64-softmmu/hw/display/virtio-gpu-3d.o
CC aarch64-softmmu/bootdevice.o
CC aarch64-softmmu/memory.o
CC aarch64-softmmu/cputlb.o
CC aarch64-softmmu/memory_mapping.o
CC x86_64-softmmu/hw/display/virtio-gpu-pci.o
CC aarch64-softmmu/dump.o
CC aarch64-softmmu/migration/ram.o
CC x86_64-softmmu/hw/display/virtio-vga.o
CC x86_64-softmmu/hw/intc/apic.o
CC x86_64-softmmu/hw/intc/apic_common.o
CC aarch64-softmmu/migration/savevm.o
CC x86_64-softmmu/hw/intc/ioapic.o
CC aarch64-softmmu/xen-common-stub.o
CC x86_64-softmmu/hw/isa/lpc_ich9.o
CC aarch64-softmmu/xen-hvm-stub.o
CC aarch64-softmmu/hw/adc/stm32f2xx_adc.o
CC aarch64-softmmu/hw/block/virtio-blk.o
CC aarch64-softmmu/hw/block/dataplane/virtio-blk.o
CC aarch64-softmmu/hw/char/exynos4210_uart.o
CC x86_64-softmmu/hw/misc/vmport.o
CC aarch64-softmmu/hw/char/omap_uart.o
CC aarch64-softmmu/hw/char/digic-uart.o
CC x86_64-softmmu/hw/misc/pvpanic.o
CC x86_64-softmmu/hw/misc/edu.o
CC x86_64-softmmu/hw/net/virtio-net.o
CC aarch64-softmmu/hw/char/stm32f2xx_usart.o
CC x86_64-softmmu/hw/net/vhost_net.o
CC aarch64-softmmu/hw/char/bcm2835_aux.o
CC aarch64-softmmu/hw/char/virtio-serial-bus.o
CC x86_64-softmmu/hw/scsi/virtio-scsi.o
CC aarch64-softmmu/hw/core/nmi.o
CC aarch64-softmmu/hw/core/generic-loader.o
CC x86_64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC aarch64-softmmu/hw/cpu/arm11mpcore.o
CC x86_64-softmmu/hw/timer/mc146818rtc.o
CC x86_64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/cpu/realview_mpcore.o
CC x86_64-softmmu/hw/virtio/virtio-balloon.o
CC x86_64-softmmu/hw/virtio/virtio-crypto.o
CC aarch64-softmmu/hw/cpu/a9mpcore.o
CC x86_64-softmmu/hw/virtio/virtio-crypto-pci.o
CC x86_64-softmmu/hw/i386/multiboot.o
CC x86_64-softmmu/hw/i386/pc.o
CC aarch64-softmmu/hw/cpu/a15mpcore.o
CC x86_64-softmmu/hw/i386/pc_piix.o
CC aarch64-softmmu/hw/cpu/core.o
CC x86_64-softmmu/hw/i386/pc_q35.o
CC aarch64-softmmu/hw/display/omap_dss.o
CC x86_64-softmmu/hw/i386/pc_sysfw.o
CC aarch64-softmmu/hw/display/omap_lcdc.o
CC x86_64-softmmu/hw/i386/x86-iommu.o
CC x86_64-softmmu/hw/i386/intel_iommu.o
CC x86_64-softmmu/hw/i386/amd_iommu.o
CC aarch64-softmmu/hw/display/pxa2xx_lcd.o
CC aarch64-softmmu/hw/display/bcm2835_fb.o
CC aarch64-softmmu/hw/display/vga.o
CC x86_64-softmmu/hw/i386/kvmvapic.o
CC x86_64-softmmu/hw/i386/acpi-build.o
CC aarch64-softmmu/hw/display/virtio-gpu.o
CC x86_64-softmmu/hw/i386/pci-assign-load-rom.o
CC aarch64-softmmu/hw/display/virtio-gpu-3d.o
CC aarch64-softmmu/hw/display/virtio-gpu-pci.o
CC aarch64-softmmu/hw/display/dpcd.o
CC aarch64-softmmu/hw/display/xlnx_dp.o
CC aarch64-softmmu/hw/dma/xlnx_dpdma.o
CC x86_64-softmmu/target-i386/translate.o
CC aarch64-softmmu/hw/dma/omap_dma.o
CC x86_64-softmmu/target-i386/helper.o
CC aarch64-softmmu/hw/dma/soc_dma.o
CC aarch64-softmmu/hw/dma/pxa2xx_dma.o
CC x86_64-softmmu/target-i386/cpu.o
CC x86_64-softmmu/target-i386/bpt_helper.o
CC aarch64-softmmu/hw/dma/bcm2835_dma.o
CC aarch64-softmmu/hw/gpio/omap_gpio.o
CC x86_64-softmmu/target-i386/excp_helper.o
CC x86_64-softmmu/target-i386/fpu_helper.o
CC aarch64-softmmu/hw/gpio/imx_gpio.o
CC x86_64-softmmu/target-i386/cc_helper.o
CC x86_64-softmmu/target-i386/int_helper.o
CC aarch64-softmmu/hw/i2c/omap_i2c.o
CC x86_64-softmmu/target-i386/svm_helper.o
CC x86_64-softmmu/target-i386/smm_helper.o
CC aarch64-softmmu/hw/input/pxa2xx_keypad.o
CC aarch64-softmmu/hw/input/tsc210x.o
CC aarch64-softmmu/hw/intc/armv7m_nvic.o
CC aarch64-softmmu/hw/intc/exynos4210_gic.o
CC x86_64-softmmu/target-i386/misc_helper.o
CC x86_64-softmmu/target-i386/mem_helper.o
CC aarch64-softmmu/hw/intc/exynos4210_combiner.o
CC x86_64-softmmu/target-i386/seg_helper.o
CC aarch64-softmmu/hw/intc/omap_intc.o
CC aarch64-softmmu/hw/intc/bcm2835_ic.o
CC aarch64-softmmu/hw/intc/bcm2836_control.o
CC x86_64-softmmu/target-i386/mpx_helper.o
CC x86_64-softmmu/target-i386/gdbstub.o
CC aarch64-softmmu/hw/intc/allwinner-a10-pic.o
CC aarch64-softmmu/hw/intc/aspeed_vic.o
CC x86_64-softmmu/target-i386/machine.o
CC aarch64-softmmu/hw/intc/arm_gicv3_cpuif.o
CC x86_64-softmmu/target-i386/arch_memory_mapping.o
CC x86_64-softmmu/target-i386/arch_dump.o
CC x86_64-softmmu/target-i386/monitor.o
CC aarch64-softmmu/hw/misc/arm_sysctl.o
CC x86_64-softmmu/target-i386/kvm-stub.o
CC aarch64-softmmu/hw/misc/cbus.o
CC aarch64-softmmu/hw/misc/exynos4210_pmu.o
GEN trace/generated-helpers.c
CC x86_64-softmmu/trace/control-target.o
CC aarch64-softmmu/hw/misc/imx_ccm.o
CC aarch64-softmmu/hw/misc/imx31_ccm.o
CC aarch64-softmmu/hw/misc/imx6_ccm.o
CC aarch64-softmmu/hw/misc/imx25_ccm.o
CC aarch64-softmmu/hw/misc/imx6_src.o
CC aarch64-softmmu/hw/misc/mst_fpga.o
CC aarch64-softmmu/hw/misc/omap_gpmc.o
CC aarch64-softmmu/hw/misc/omap_clk.o
CC x86_64-softmmu/trace/generated-helpers.o
CC aarch64-softmmu/hw/misc/omap_l4.o
CC aarch64-softmmu/hw/misc/omap_sdrc.o
CC aarch64-softmmu/hw/misc/omap_tap.o
CC aarch64-softmmu/hw/misc/bcm2835_mbox.o
CC aarch64-softmmu/hw/misc/bcm2835_property.o
CC aarch64-softmmu/hw/misc/zynq_slcr.o
CC aarch64-softmmu/hw/misc/zynq-xadc.o
CC aarch64-softmmu/hw/misc/stm32f2xx_syscfg.o
CC aarch64-softmmu/hw/misc/edu.o
CC aarch64-softmmu/hw/misc/auxbus.o
CC aarch64-softmmu/hw/misc/aspeed_scu.o
CC aarch64-softmmu/hw/misc/aspeed_sdmc.o
CC aarch64-softmmu/hw/net/virtio-net.o
CC aarch64-softmmu/hw/net/vhost_net.o
CC aarch64-softmmu/hw/pcmcia/pxa2xx.o
CC aarch64-softmmu/hw/scsi/virtio-scsi.o
CC aarch64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC aarch64-softmmu/hw/sd/omap_mmc.o
CC aarch64-softmmu/hw/sd/pxa2xx_mmci.o
CC aarch64-softmmu/hw/ssi/omap_spi.o
CC aarch64-softmmu/hw/ssi/imx_spi.o
CC aarch64-softmmu/hw/timer/exynos4210_mct.o
CC aarch64-softmmu/hw/timer/exynos4210_pwm.o
CC aarch64-softmmu/hw/timer/exynos4210_rtc.o
CC aarch64-softmmu/hw/timer/omap_gptimer.o
CC aarch64-softmmu/hw/timer/omap_synctimer.o
LINK x86_64-softmmu/qemu-system-x86_64w.exe
CC aarch64-softmmu/hw/timer/pxa2xx_timer.o
CC aarch64-softmmu/hw/timer/digic-timer.o
CC aarch64-softmmu/hw/timer/allwinner-a10-pit.o
CC aarch64-softmmu/hw/usb/tusb6010.o
CC aarch64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/virtio/virtio-balloon.o
CC aarch64-softmmu/hw/virtio/virtio-crypto.o
CC aarch64-softmmu/hw/virtio/virtio-crypto-pci.o
CC aarch64-softmmu/hw/arm/boot.o
CC aarch64-softmmu/hw/arm/collie.o
CC aarch64-softmmu/hw/arm/exynos4_boards.o
CC aarch64-softmmu/hw/arm/gumstix.o
CC aarch64-softmmu/hw/arm/highbank.o
CC aarch64-softmmu/hw/arm/digic_boards.o
CC aarch64-softmmu/hw/arm/integratorcp.o
CC aarch64-softmmu/hw/arm/mainstone.o
hw/virtio/virtio.o: In function `virtio_queue_aio_set_host_notifier_handler':
/tmp/qemu-test/src/hw/virtio/virtio.c:2048: undefined reference to `aio_set_poll_handler'
/tmp/qemu-test/src/hw/virtio/virtio.c:2056: undefined reference to `aio_set_poll_handler'
collect2: error: ld returned 1 exit status
Makefile:198: recipe for target 'qemu-system-x86_64w.exe' failed
make[1]: *** [qemu-system-x86_64w.exe] Error 1
Makefile:202: recipe for target 'subdir-x86_64-softmmu' failed
make: *** [subdir-x86_64-softmmu] Error 2
make: *** Waiting for unfinished jobs....
CC aarch64-softmmu/hw/arm/musicpal.o
CC aarch64-softmmu/hw/arm/nseries.o
CC aarch64-softmmu/hw/arm/omap_sx1.o
CC aarch64-softmmu/hw/arm/palm.o
CC aarch64-softmmu/hw/arm/spitz.o
CC aarch64-softmmu/hw/arm/realview.o
CC aarch64-softmmu/hw/arm/stellaris.o
CC aarch64-softmmu/hw/arm/tosa.o
CC aarch64-softmmu/hw/arm/versatilepb.o
CC aarch64-softmmu/hw/arm/vexpress.o
CC aarch64-softmmu/hw/arm/virt.o
CC aarch64-softmmu/hw/arm/xilinx_zynq.o
CC aarch64-softmmu/hw/arm/z2.o
CC aarch64-softmmu/hw/arm/virt-acpi-build.o
CC aarch64-softmmu/hw/arm/netduino2.o
CC aarch64-softmmu/hw/arm/sysbus-fdt.o
CC aarch64-softmmu/hw/arm/armv7m.o
CC aarch64-softmmu/hw/arm/exynos4210.o
CC aarch64-softmmu/hw/arm/pxa2xx.o
CC aarch64-softmmu/hw/arm/pxa2xx_gpio.o
CC aarch64-softmmu/hw/arm/pxa2xx_pic.o
CC aarch64-softmmu/hw/arm/digic.o
CC aarch64-softmmu/hw/arm/omap1.o
CC aarch64-softmmu/hw/arm/omap2.o
CC aarch64-softmmu/hw/arm/strongarm.o
CC aarch64-softmmu/hw/arm/allwinner-a10.o
CC aarch64-softmmu/hw/arm/cubieboard.o
CC aarch64-softmmu/hw/arm/bcm2835_peripherals.o
CC aarch64-softmmu/hw/arm/bcm2836.o
CC aarch64-softmmu/hw/arm/raspi.o
CC aarch64-softmmu/hw/arm/stm32f205_soc.o
CC aarch64-softmmu/hw/arm/xlnx-zynqmp.o
CC aarch64-softmmu/hw/arm/xlnx-ep108.o
CC aarch64-softmmu/hw/arm/fsl-imx25.o
CC aarch64-softmmu/hw/arm/imx25_pdk.o
CC aarch64-softmmu/hw/arm/fsl-imx31.o
CC aarch64-softmmu/hw/arm/kzm.o
CC aarch64-softmmu/hw/arm/fsl-imx6.o
CC aarch64-softmmu/hw/arm/sabrelite.o
CC aarch64-softmmu/hw/arm/aspeed_soc.o
CC aarch64-softmmu/hw/arm/aspeed.o
CC aarch64-softmmu/target-arm/arm-semi.o
CC aarch64-softmmu/target-arm/machine.o
CC aarch64-softmmu/target-arm/psci.o
CC aarch64-softmmu/target-arm/arch_dump.o
CC aarch64-softmmu/target-arm/monitor.o
CC aarch64-softmmu/target-arm/kvm-stub.o
CC aarch64-softmmu/target-arm/translate.o
CC aarch64-softmmu/target-arm/op_helper.o
CC aarch64-softmmu/target-arm/helper.o
CC aarch64-softmmu/target-arm/cpu.o
CC aarch64-softmmu/target-arm/neon_helper.o
CC aarch64-softmmu/target-arm/iwmmxt_helper.o
CC aarch64-softmmu/target-arm/gdbstub.o
CC aarch64-softmmu/target-arm/cpu64.o
CC aarch64-softmmu/target-arm/translate-a64.o
CC aarch64-softmmu/target-arm/helper-a64.o
CC aarch64-softmmu/target-arm/gdbstub64.o
CC aarch64-softmmu/target-arm/crypto_helper.o
CC aarch64-softmmu/target-arm/arm-powerctl.o
GEN trace/generated-helpers.c
CC aarch64-softmmu/trace/control-target.o
CC aarch64-softmmu/gdbstub-xml.o
CC aarch64-softmmu/trace/generated-helpers.o
LINK aarch64-softmmu/qemu-system-aarch64w.exe
hw/virtio/virtio.o: In function `virtio_queue_aio_set_host_notifier_handler':
/tmp/qemu-test/src/hw/virtio/virtio.c:2048: undefined reference to `aio_set_poll_handler'
/tmp/qemu-test/src/hw/virtio/virtio.c:2056: undefined reference to `aio_set_poll_handler'
collect2: error: ld returned 1 exit status
Makefile:198: recipe for target 'qemu-system-aarch64w.exe' failed
make[1]: *** [qemu-system-aarch64w.exe] Error 1
Makefile:202: recipe for target 'subdir-aarch64-softmmu' failed
make: *** [subdir-aarch64-softmmu] Error 2
make[1]: *** [docker-run] Error 2
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-c68h00s0/src'
make: *** [docker-run-test-mingw@fedora] Error 2
=== OUTPUT END ===
Test command exited with code: 2
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-11 19:59 ` [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Karl Rister
@ 2016-11-14 13:53 ` Fam Zheng
2016-11-14 14:52 ` Karl Rister
2016-11-14 15:26 ` Stefan Hajnoczi
1 sibling, 1 reply; 29+ messages in thread
From: Fam Zheng @ 2016-11-14 13:53 UTC (permalink / raw)
To: Karl Rister; +Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Andrew Theurer
On Fri, 11/11 13:59, Karl Rister wrote:
>
> Stefan
>
> I ran some quick tests with your patches and got some pretty good gains,
> but also some seemingly odd behavior.
>
> These results are for a 5 minute test doing sequential 4KB requests from
> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
> performed directly against the virtio-blk device (no filesystem) which
> is backed by a 400GB NVMe card.
>
> QEMU_AIO_POLL_MAX_NS IOPs
> unset 31,383
> 1 46,860
> 2 46,440
> 4 35,246
> 8 34,973
> 16 46,794
> 32 46,729
> 64 35,520
> 128 45,902
For sequential reads with iodepth=1, each request takes >20,000 ns at 45,000 IOPS.
Isn't a poll time of 128 ns a mismatched order of magnitude? Have you tried
larger values? Not criticizing, just trying to understand how it works.
Also, do you happen to have numbers for unpatched QEMU (just to confirm that the
"unset" case doesn't cause a regression) and bare metal for comparison?
Fam
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
` (4 preceding siblings ...)
2016-11-13 6:20 ` no-reply
@ 2016-11-14 14:51 ` Christian Borntraeger
2016-11-14 16:53 ` Stefan Hajnoczi
2016-11-14 14:59 ` Christian Borntraeger
6 siblings, 1 reply; 29+ messages in thread
From: Christian Borntraeger @ 2016-11-14 14:51 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel; +Cc: Paolo Bonzini, Fam Zheng, Karl Rister
On 11/09/2016 06:13 PM, Stefan Hajnoczi wrote:
> Recent performance investigation work done by Karl Rister shows that the
> guest->host notification takes around 20 us. This is more than the "overhead"
> of QEMU itself (e.g. block layer).
>
> One way to avoid the costly exit is to use polling instead of notification.
> The main drawback of polling is that it consumes CPU resources. In order to
> benefit performance the host must have extra CPU cycles available on physical
> CPUs that aren't used by the guest.
>
> This is an experimental AioContext polling implementation. It adds a polling
> callback into the event loop. Polling functions are implemented for virtio-blk
> virtqueue guest->host kick and Linux AIO completion.
>
> The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> poll before entering the usual blocking poll(2) syscall. Try setting this
> variable to the time from old request completion to new virtqueue kick.
>
> By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> polling!
>
> Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> values. If you don't find a good value we should double-check the tracing data
> to see if this experimental code can be improved.
>
> Stefan Hajnoczi (3):
> aio-posix: add aio_set_poll_handler()
> virtio: poll virtqueues for new buffers
> linux-aio: poll ring for completions
>
> aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> block/linux-aio.c | 17 +++++++
> hw/virtio/virtio.c | 19 ++++++++
> include/block/aio.h | 16 +++++++
> 4 files changed, 185 insertions(+)
Hmm, I see all affected threads using more CPU power, but the performance numbers are
somewhat inconclusive on s390. I have no proper test setup (only a shared LPAR), but
all numbers are in the same ballpark of 3-5 GByte/s for 5 disks doing 4k random reads
with iodepth=8.
What I find interesting is that the guest still does a huge amount of exits for the
guest->host notifications. I think if we could combine this with some notification
suppression, then things could be even more interesting.
Christian
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 13:53 ` Fam Zheng
@ 2016-11-14 14:52 ` Karl Rister
2016-11-14 16:56 ` Stefan Hajnoczi
0 siblings, 1 reply; 29+ messages in thread
From: Karl Rister @ 2016-11-14 14:52 UTC (permalink / raw)
To: Fam Zheng; +Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Andrew Theurer
On 11/14/2016 07:53 AM, Fam Zheng wrote:
> On Fri, 11/11 13:59, Karl Rister wrote:
>>
>> Stefan
>>
>> I ran some quick tests with your patches and got some pretty good gains,
>> but also some seemingly odd behavior.
>>
>> These results are for a 5 minute test doing sequential 4KB requests from
>> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
>> performed directly against the virtio-blk device (no filesystem) which
is backed by a 400GB NVMe card.
>>
>> QEMU_AIO_POLL_MAX_NS IOPs
>> unset 31,383
>> 1 46,860
>> 2 46,440
>> 4 35,246
>> 8 34,973
>> 16 46,794
>> 32 46,729
>> 64 35,520
>> 128 45,902
>
> For sequential read with ioq=1, each request takes >20000ns under 45,000 IOPs.
> Isn't a poll time of 128ns a mismatching order of magnitude? Have you tried
> larger values? Not criticizing, just trying to understand how it works.
Not yet, I was just trying to get something out as quick as I could
(while juggling this with some other stuff...). Frankly I was a bit
surprised that the low values made such an impact and then got
distracted by the behaviors of 4, 8, and 64.
>
> Also, do you happen to have numbers for unpatched QEMU (just to confirm that
> "unset" case doesn't cause regression) and baremetal for comparison?
I didn't run this exact test on the same qemu.git master changeset
unpatched. I did however previously try it against the v2.7.0 tag and
got somewhere around 27.5K IOPs. My original intention was to apply the
patches to v2.7.0 but it wouldn't build.
We have done a lot of testing and tracing on the qemu-rhev package and
27K IOPs is about what we see there (with tracing disabled).
Given the patch discussions I saw, I was mainly trying to get a sniff
test out and then do a more complete workup with whatever updates are made.
I should probably note that there are a lot of pinning optimizations
made here to assist in our tracing efforts, which also result in improved
performance. Ultimately, in a proper evaluation of these patches most
of that will be removed, so the behavior may change somewhat.
>
> Fam
>
--
Karl Rister <krister@redhat.com>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
` (5 preceding siblings ...)
2016-11-14 14:51 ` Christian Borntraeger
@ 2016-11-14 14:59 ` Christian Borntraeger
2016-11-14 16:52 ` Stefan Hajnoczi
6 siblings, 1 reply; 29+ messages in thread
From: Christian Borntraeger @ 2016-11-14 14:59 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel; +Cc: Paolo Bonzini, Fam Zheng, Karl Rister
On 11/09/2016 06:13 PM, Stefan Hajnoczi wrote:
> Recent performance investigation work done by Karl Rister shows that the
> guest->host notification takes around 20 us. This is more than the "overhead"
> of QEMU itself (e.g. block layer).
>
> One way to avoid the costly exit is to use polling instead of notification.
> The main drawback of polling is that it consumes CPU resources. In order to
> benefit performance the host must have extra CPU cycles available on physical
> CPUs that aren't used by the guest.
>
> This is an experimental AioContext polling implementation. It adds a polling
> callback into the event loop. Polling functions are implemented for virtio-blk
> virtqueue guest->host kick and Linux AIO completion.
>
> The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> poll before entering the usual blocking poll(2) syscall. Try setting this
> variable to the time from old request completion to new virtqueue kick.
>
> By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> polling!
>
> Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> values. If you don't find a good value we should double-check the tracing data
> to see if this experimental code can be improved.
>
> Stefan Hajnoczi (3):
> aio-posix: add aio_set_poll_handler()
> virtio: poll virtqueues for new buffers
> linux-aio: poll ring for completions
>
> aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> block/linux-aio.c | 17 +++++++
> hw/virtio/virtio.c | 19 ++++++++
> include/block/aio.h | 16 +++++++
> 4 files changed, 185 insertions(+)
>
Another observation: with more iothreads than host CPUs, the performance drops significantly.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-11 19:59 ` [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Karl Rister
2016-11-14 13:53 ` Fam Zheng
@ 2016-11-14 15:26 ` Stefan Hajnoczi
2016-11-14 15:29 ` Paolo Bonzini
` (2 more replies)
1 sibling, 3 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-14 15:26 UTC (permalink / raw)
To: Karl Rister
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Paolo Bonzini, Fam Zheng
[-- Attachment #1: Type: text/plain, Size: 3836 bytes --]
On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
> On 11/09/2016 11:13 AM, Stefan Hajnoczi wrote:
> > Recent performance investigation work done by Karl Rister shows that the
> > guest->host notification takes around 20 us. This is more than the "overhead"
> > of QEMU itself (e.g. block layer).
> >
> > One way to avoid the costly exit is to use polling instead of notification.
> > The main drawback of polling is that it consumes CPU resources. In order to
> > benefit performance the host must have extra CPU cycles available on physical
> > CPUs that aren't used by the guest.
> >
> > This is an experimental AioContext polling implementation. It adds a polling
> > callback into the event loop. Polling functions are implemented for virtio-blk
> > virtqueue guest->host kick and Linux AIO completion.
> >
> > The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> > poll before entering the usual blocking poll(2) syscall. Try setting this
> > variable to the time from old request completion to new virtqueue kick.
> >
> > By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> > polling!
> >
> > Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> > values. If you don't find a good value we should double-check the tracing data
> > to see if this experimental code can be improved.
>
> Stefan
>
> I ran some quick tests with your patches and got some pretty good gains,
> but also some seemingly odd behavior.
>
> These results are for a 5 minute test doing sequential 4KB requests from
> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
> performed directly against the virtio-blk device (no filesystem) which
> is backed by a 400GB NVMe card.
>
> QEMU_AIO_POLL_MAX_NS IOPs
> unset 31,383
> 1 46,860
> 2 46,440
> 4 35,246
> 8 34,973
> 16 46,794
> 32 46,729
> 64 35,520
> 128 45,902
The environment variable is in nanoseconds. The range of values you
tried are very small (all <1 usec). It would be interesting to try
larger values in the ballpark of the latencies you have traced. For
example 2000, 4000, 8000, 16000, and 32000 ns.
Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
much CPU overhead.
> I found the results for 4, 8, and 64 odd so I re-ran some tests to check
> for consistency. I used values of 2 and 4 and ran each 5 times. Here
> is what I got:
>
> Iteration QEMU_AIO_POLL_MAX_NS=2 QEMU_AIO_POLL_MAX_NS=4
> 1 46,972 35,434
> 2 46,939 35,719
> 3 47,005 35,584
> 4 47,016 35,615
> 5 47,267 35,474
>
> So the results seem consistent.
That is interesting. I don't have an explanation for the consistent
difference between 2 and 4 ns polling time. The time difference is so
small yet the IOPS difference is clear.
Comparing traces could shed light on the cause for this difference.
> I saw some discussion on the patches made which make me think you'll be
> making some changes, is that right? If so, I may wait for the updates
> and then we can run the much more exhaustive set of workloads
> (sequential read and write, random read and write) at various block
> sizes (4, 8, 16, 32, 64, 128, and 256) and multiple IO depths (1 and 32)
> that we were doing when we started looking at this.
I'll send an updated version of the patches.
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 15:26 ` Stefan Hajnoczi
@ 2016-11-14 15:29 ` Paolo Bonzini
2016-11-14 17:06 ` Stefan Hajnoczi
2016-11-16 8:27 ` Fam Zheng
2016-11-14 15:36 ` Karl Rister
2016-11-14 20:12 ` Karl Rister
2 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-11-14 15:29 UTC (permalink / raw)
To: Stefan Hajnoczi, Karl Rister
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
[-- Attachment #1: Type: text/plain, Size: 1079 bytes --]
On 14/11/2016 16:26, Stefan Hajnoczi wrote:
> On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
>> QEMU_AIO_POLL_MAX_NS IOPs
>> unset 31,383
>> 1 46,860
>> 2 46,440
>> 4 35,246
>> 8 34,973
>> 16 46,794
>> 32 46,729
>> 64 35,520
>> 128 45,902
>
> The environment variable is in nanoseconds. The range of values you
> tried are very small (all <1 usec). It would be interesting to try
> larger values in the ballpark of the latencies you have traced. For
> example 2000, 4000, 8000, 16000, and 32000 ns.
>
> Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> much CPU overhead.
That basically means "avoid a syscall if you already know there's
something to do", so in retrospect it's not that surprising. Still
interesting though, and it means that the feature is useful even if you
don't have CPU to waste.
Paolo
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 15:26 ` Stefan Hajnoczi
2016-11-14 15:29 ` Paolo Bonzini
@ 2016-11-14 15:36 ` Karl Rister
2016-11-14 20:12 ` Karl Rister
2 siblings, 0 replies; 29+ messages in thread
From: Karl Rister @ 2016-11-14 15:36 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Paolo Bonzini, Fam Zheng
On 11/14/2016 09:26 AM, Stefan Hajnoczi wrote:
> On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
>> On 11/09/2016 11:13 AM, Stefan Hajnoczi wrote:
>>> Recent performance investigation work done by Karl Rister shows that the
>>> guest->host notification takes around 20 us. This is more than the "overhead"
>>> of QEMU itself (e.g. block layer).
>>>
>>> One way to avoid the costly exit is to use polling instead of notification.
>>> The main drawback of polling is that it consumes CPU resources. In order to
>>> benefit performance the host must have extra CPU cycles available on physical
>>> CPUs that aren't used by the guest.
>>>
>>> This is an experimental AioContext polling implementation. It adds a polling
>>> callback into the event loop. Polling functions are implemented for virtio-blk
>>> virtqueue guest->host kick and Linux AIO completion.
>>>
>>> The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
>>> poll before entering the usual blocking poll(2) syscall. Try setting this
>>> variable to the time from old request completion to new virtqueue kick.
>>>
>>> By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
>>> polling!
>>>
>>> Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
>>> values. If you don't find a good value we should double-check the tracing data
>>> to see if this experimental code can be improved.
>>
>> Stefan
>>
>> I ran some quick tests with your patches and got some pretty good gains,
>> but also some seemingly odd behavior.
>>
>> These results are for a 5 minute test doing sequential 4KB requests from
>> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
>> performed directly against the virtio-blk device (no filesystem) which
>> is backed by a 400GB NVMe card.
>>
>> QEMU_AIO_POLL_MAX_NS IOPs
>> unset 31,383
>> 1 46,860
>> 2 46,440
>> 4 35,246
>> 8 34,973
>> 16 46,794
>> 32 46,729
>> 64 35,520
>> 128 45,902
>
> The environment variable is in nanoseconds. The range of values you
> tried are very small (all <1 usec). It would be interesting to try
> larger values in the ballpark of the latencies you have traced. For
> example 2000, 4000, 8000, 16000, and 32000 ns.
Agreed. As I alluded to in another post, I decided to start at 1 and
double the values until I saw a difference with the expectation that it
would have to get quite large before that happened. The results went in
a different direction, and then I got distracted by the variation at
certain points. I figured that by itself the fact that noticeable
improvements were possible with such low values was interesting.
I will definitely continue the progression and capture some larger values.
>
> Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> much CPU overhead.
>
>> I found the results for 4, 8, and 64 odd so I re-ran some tests to check
>> for consistency. I used values of 2 and 4 and ran each 5 times. Here
>> is what I got:
>>
>> Iteration QEMU_AIO_POLL_MAX_NS=2 QEMU_AIO_POLL_MAX_NS=4
>> 1 46,972 35,434
>> 2 46,939 35,719
>> 3 47,005 35,584
>> 4 47,016 35,615
>> 5 47,267 35,474
>>
>> So the results seem consistent.
>
> That is interesting. I don't have an explanation for the consistent
> difference between 2 and 4 ns polling time. The time difference is so
> small yet the IOPS difference is clear.
>
> Comparing traces could shed light on the cause for this difference.
>
>> I saw some discussion on the patches made which make me think you'll be
>> making some changes, is that right? If so, I may wait for the updates
>> and then we can run the much more exhaustive set of workloads
>> (sequential read and write, random read and write) at various block
>> sizes (4, 8, 16, 32, 64, 128, and 256) and multiple IO depths (1 and 32)
>> that we were doing when we started looking at this.
>
> I'll send an updated version of the patches.
>
> Stefan
>
--
Karl Rister <krister@redhat.com>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 14:59 ` Christian Borntraeger
@ 2016-11-14 16:52 ` Stefan Hajnoczi
0 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-14 16:52 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Fam Zheng, Karl Rister
[-- Attachment #1: Type: text/plain, Size: 2168 bytes --]
On Mon, Nov 14, 2016 at 03:59:32PM +0100, Christian Borntraeger wrote:
> On 11/09/2016 06:13 PM, Stefan Hajnoczi wrote:
> > Recent performance investigation work done by Karl Rister shows that the
> > guest->host notification takes around 20 us. This is more than the "overhead"
> > of QEMU itself (e.g. block layer).
> >
> > One way to avoid the costly exit is to use polling instead of notification.
> > The main drawback of polling is that it consumes CPU resources. In order to
> > benefit performance the host must have extra CPU cycles available on physical
> > CPUs that aren't used by the guest.
> >
> > This is an experimental AioContext polling implementation. It adds a polling
> > callback into the event loop. Polling functions are implemented for virtio-blk
> > virtqueue guest->host kick and Linux AIO completion.
> >
> > The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> > poll before entering the usual blocking poll(2) syscall. Try setting this
> > variable to the time from old request completion to new virtqueue kick.
> >
> > By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> > polling!
> >
> > Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> > values. If you don't find a good value we should double-check the tracing data
> > to see if this experimental code can be improved.
> >
> > Stefan Hajnoczi (3):
> > aio-posix: add aio_set_poll_handler()
> > virtio: poll virtqueues for new buffers
> > linux-aio: poll ring for completions
> >
> > aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > block/linux-aio.c | 17 +++++++
> > hw/virtio/virtio.c | 19 ++++++++
> > include/block/aio.h | 16 +++++++
> > 4 files changed, 185 insertions(+)
> >
>
> Another observation: With more iothreads than host CPUs the performance drops significantly.
This makes sense, although we can eliminate it in common cases by only
polling when we actually need to monitor events.
The current series wastes CPU on Linux AIO polling when no requests are
pending ;).
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 14:51 ` Christian Borntraeger
@ 2016-11-14 16:53 ` Stefan Hajnoczi
0 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-14 16:53 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini, Fam Zheng, Karl Rister
[-- Attachment #1: Type: text/plain, Size: 2438 bytes --]
On Mon, Nov 14, 2016 at 03:51:18PM +0100, Christian Borntraeger wrote:
> On 11/09/2016 06:13 PM, Stefan Hajnoczi wrote:
> > Recent performance investigation work done by Karl Rister shows that the
> > guest->host notification takes around 20 us. This is more than the "overhead"
> > of QEMU itself (e.g. block layer).
> >
> > One way to avoid the costly exit is to use polling instead of notification.
> > The main drawback of polling is that it consumes CPU resources. In order to
> > benefit performance the host must have extra CPU cycles available on physical
> > CPUs that aren't used by the guest.
> >
> > This is an experimental AioContext polling implementation. It adds a polling
> > callback into the event loop. Polling functions are implemented for virtio-blk
> > virtqueue guest->host kick and Linux AIO completion.
> >
> > The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
> > poll before entering the usual blocking poll(2) syscall. Try setting this
> > variable to the time from old request completion to new virtqueue kick.
> >
> > By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
> > polling!
> >
> > Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
> > values. If you don't find a good value we should double-check the tracing data
> > to see if this experimental code can be improved.
> >
> > Stefan Hajnoczi (3):
> > aio-posix: add aio_set_poll_handler()
> > virtio: poll virtqueues for new buffers
> > linux-aio: poll ring for completions
> >
> > aio-posix.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > block/linux-aio.c | 17 +++++++
> > hw/virtio/virtio.c | 19 ++++++++
> > include/block/aio.h | 16 +++++++
> > 4 files changed, 185 insertions(+)
>
> Hmm, I see all affected threads using more CPU power, but the performance numbers are
> somewhat inconclusive on s390. I have no proper test setup (only a shared LPAR), but
> all numbers are in the same ballpark of 3-5Gbyte/sec for 5 disks for 4k random reads
> with iodepth=8.
>
> What I find interesting is that the guest still does a huge amount of exits for the
> guest->host notifications. I think if we could combine this with some notification
> suppression, then things could be even more interesting.
Great idea. I'll add that to the next revision.
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 14:52 ` Karl Rister
@ 2016-11-14 16:56 ` Stefan Hajnoczi
0 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-14 16:56 UTC (permalink / raw)
To: Karl Rister
Cc: Fam Zheng, Andrew Theurer, Paolo Bonzini, qemu-devel, Stefan Hajnoczi
[-- Attachment #1: Type: text/plain, Size: 2600 bytes --]
On Mon, Nov 14, 2016 at 08:52:21AM -0600, Karl Rister wrote:
> On 11/14/2016 07:53 AM, Fam Zheng wrote:
> > On Fri, 11/11 13:59, Karl Rister wrote:
> >>
> >> Stefan
> >>
> >> I ran some quick tests with your patches and got some pretty good gains,
> >> but also some seemingly odd behavior.
> >>
> >> These results are for a 5 minute test doing sequential 4KB requests from
> >> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
> >> performed directly against the virtio-blk device (no filesystem) which
> >> is backed by a 400GB NVMe card.
> >>
> >> QEMU_AIO_POLL_MAX_NS IOPs
> >> unset 31,383
> >> 1 46,860
> >> 2 46,440
> >> 4 35,246
> >> 8 34,973
> >> 16 46,794
> >> 32 46,729
> >> 64 35,520
> >> 128 45,902
> >
> > For sequential read with ioq=1, each request takes >20000ns under 45,000 IOPs.
> > Isn't a poll time of 128ns a mismatching order of magnitude? Have you tried
> > larger values? Not criticizing, just trying to understand how it works.
>
> Not yet, I was just trying to get something out as quick as I could
> (while juggling this with some other stuff...). Frankly I was a bit
> surprised that the low values made such an impact and then got
> distracted by the behaviors of 4, 8, and 64.
>
> >
> > Also, do you happen to have numbers for unpatched QEMU (just to confirm that
> > "unset" case doesn't cause regression) and baremetal for comparison?
>
> I didn't run this exact test on the same qemu.git master changeset
> unpatched. I did however previously try it against the v2.7.0 tag and
> got somewhere around 27.5K IOPs. My original intention was to apply the
> patches to v2.7.0 but it wouldn't build.
>
> We have done a lot of testing and tracing on the qemu-rhev package and
> 27K IOPs is about what we see there (with tracing disabled).
>
> Given the patch discussions I saw I was mainly trying to get a sniff
> test out and then do a more complete workup with whatever updates are made.
>
> I should probably note that there are a lot of pinning optimizations
> made here to assist in our tracing efforts which also result in improved
> performance. Ultimately, in a proper evaluation of these patches most
> of that will be removed so the behavior may change somewhat.
To clarify: QEMU_AIO_POLL_MAX_NS unset or 0 disables polling completely.
Therefore it's not necessary to run unpatched.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 15:29 ` Paolo Bonzini
@ 2016-11-14 17:06 ` Stefan Hajnoczi
2016-11-14 17:13 ` Fam Zheng
2016-11-14 17:15 ` Paolo Bonzini
2016-11-16 8:27 ` Fam Zheng
1 sibling, 2 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-14 17:06 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Karl Rister, Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
[-- Attachment #1: Type: text/plain, Size: 1526 bytes --]
On Mon, Nov 14, 2016 at 04:29:49PM +0100, Paolo Bonzini wrote:
> On 14/11/2016 16:26, Stefan Hajnoczi wrote:
> > On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
> >> QEMU_AIO_POLL_MAX_NS IOPs
> >> unset 31,383
> >> 1 46,860
> >> 2 46,440
> >> 4 35,246
> >> 8 34,973
> >> 16 46,794
> >> 32 46,729
> >> 64 35,520
> >> 128 45,902
> >
> > The environment variable is in nanoseconds. The range of values you
> > tried are very small (all <1 usec). It would be interesting to try
> > larger values in the ballpark of the latencies you have traced. For
> > example 2000, 4000, 8000, 16000, and 32000 ns.
> >
> > Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> > much CPU overhead.
>
> That basically means "avoid a syscall if you already know there's
> something to do", so in retrospect it's not that surprising. Still
> interesting though, and it means that the feature is useful even if you
> don't have CPU to waste.
Can you spell out which syscall you mean? Reading the ioeventfd?
The benchmark uses virtio-blk dataplane and iodepth=1 so there shouldn't
be much IOThread event loop activity besides the single I/O request.
The reason this puzzles me is that I wouldn't expect poll to succeed
with QEMU_AIO_POLL_MAX_NS=1 and iodepth=1.
Thanks,
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 17:06 ` Stefan Hajnoczi
@ 2016-11-14 17:13 ` Fam Zheng
2016-11-14 17:15 ` Paolo Bonzini
1 sibling, 0 replies; 29+ messages in thread
From: Fam Zheng @ 2016-11-14 17:13 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Paolo Bonzini, Karl Rister, Stefan Hajnoczi, qemu-devel, Andrew Theurer
On Mon, 11/14 17:06, Stefan Hajnoczi wrote:
> On Mon, Nov 14, 2016 at 04:29:49PM +0100, Paolo Bonzini wrote:
> > On 14/11/2016 16:26, Stefan Hajnoczi wrote:
> > > On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
> > >> QEMU_AIO_POLL_MAX_NS IOPs
> > >> unset 31,383
> > >> 1 46,860
> > >> 2 46,440
> > >> 4 35,246
> > >> 8 34,973
> > >> 16 46,794
> > >> 32 46,729
> > >> 64 35,520
> > >> 128 45,902
> > >
> > > The environment variable is in nanoseconds. The range of values you
> > > tried are very small (all <1 usec). It would be interesting to try
> > > larger values in the ballpark of the latencies you have traced. For
> > > example 2000, 4000, 8000, 16000, and 32000 ns.
> > >
> > > Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> > > much CPU overhead.
> >
> > That basically means "avoid a syscall if you already know there's
> > something to do", so in retrospect it's not that surprising. Still
> > interesting though, and it means that the feature is useful even if you
> > don't have CPU to waste.
>
> Can you spell out which syscall you mean? Reading the ioeventfd?
>
> The benchmark uses virtio-blk dataplane and iodepth=1 so there shouldn't
> be much IOThread event loop activity besides the single I/O request.
>
> The reason this puzzles me is that I wouldn't expect poll to succeed
> with QEMU_AIO_POLL_MAX_NS and iodepth=1.
I see the guest shouldn't send more requests, but isn't it possible for
the linux-aio poll to succeed?
Fam
>
> Thanks,
> Stefan
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 17:06 ` Stefan Hajnoczi
2016-11-14 17:13 ` Fam Zheng
@ 2016-11-14 17:15 ` Paolo Bonzini
2016-11-15 10:36 ` Stefan Hajnoczi
1 sibling, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-11-14 17:15 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Karl Rister, Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
[-- Attachment #1: Type: text/plain, Size: 908 bytes --]
On 14/11/2016 18:06, Stefan Hajnoczi wrote:
>>> > > Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
>>> > > much CPU overhead.
>> >
>> > That basically means "avoid a syscall if you already know there's
>> > something to do", so in retrospect it's not that surprising. Still
>> > interesting though, and it means that the feature is useful even if you
>> > don't have CPU to waste.
> Can you spell out which syscall you mean? Reading the ioeventfd?
I mean ppoll. If ppoll succeeds without ever going to sleep, you can
achieve the same result with QEMU_AIO_POLL_MAX_NS=1, but cheaper.
Paolo
> The benchmark uses virtio-blk dataplane and iodepth=1 so there shouldn't
> be much IOThread event loop activity besides the single I/O request.
>
> The reason this puzzles me is that I wouldn't expect poll to succeed
> with QEMU_AIO_POLL_MAX_NS and iodepth=1.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 15:26 ` Stefan Hajnoczi
2016-11-14 15:29 ` Paolo Bonzini
2016-11-14 15:36 ` Karl Rister
@ 2016-11-14 20:12 ` Karl Rister
2016-11-14 20:52 ` Paolo Bonzini
2 siblings, 1 reply; 29+ messages in thread
From: Karl Rister @ 2016-11-14 20:12 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Paolo Bonzini, Fam Zheng
On 11/14/2016 09:26 AM, Stefan Hajnoczi wrote:
> On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
>> On 11/09/2016 11:13 AM, Stefan Hajnoczi wrote:
>>> Recent performance investigation work done by Karl Rister shows that the
>>> guest->host notification takes around 20 us. This is more than the "overhead"
>>> of QEMU itself (e.g. block layer).
>>>
>>> One way to avoid the costly exit is to use polling instead of notification.
>>> The main drawback of polling is that it consumes CPU resources. In order to
>>> benefit performance the host must have extra CPU cycles available on physical
>>> CPUs that aren't used by the guest.
>>>
>>> This is an experimental AioContext polling implementation. It adds a polling
>>> callback into the event loop. Polling functions are implemented for virtio-blk
>>> virtqueue guest->host kick and Linux AIO completion.
>>>
>>> The QEMU_AIO_POLL_MAX_NS environment variable sets the number of nanoseconds to
>>> poll before entering the usual blocking poll(2) syscall. Try setting this
>>> variable to the time from old request completion to new virtqueue kick.
>>>
>>> By default no polling is done. The QEMU_AIO_POLL_MAX_NS must be set to get any
>>> polling!
>>>
>>> Karl: I hope you can try this patch series with several QEMU_AIO_POLL_MAX_NS
>>> values. If you don't find a good value we should double-check the tracing data
>>> to see if this experimental code can be improved.
>>
>> Stefan
>>
>> I ran some quick tests with your patches and got some pretty good gains,
>> but also some seemingly odd behavior.
>>
>> These results are for a 5 minute test doing sequential 4KB requests from
>> fio using O_DIRECT, libaio, and IO depth of 1. The requests are
>> performed directly against the virtio-blk device (no filesystem) which
>> is backed by a 400GB NVMe card.
>>
>> QEMU_AIO_POLL_MAX_NS IOPs
>> unset 31,383
>> 1 46,860
>> 2 46,440
>> 4 35,246
>> 8 34,973
>> 16 46,794
>> 32 46,729
>> 64 35,520
>> 128 45,902
>
> The environment variable is in nanoseconds. The range of values you
> tried are very small (all <1 usec). It would be interesting to try
> larger values in the ballpark of the latencies you have traced. For
> example 2000, 4000, 8000, 16000, and 32000 ns.
Here are some more numbers with higher values. I continued the powers-of-2
sequence and added in your suggested values as well:
QEMU_AIO_POLL_MAX_NS IOPs
256 46,929
512 35,627
1,024 46,477
2,000 35,247
2,048 46,322
4,000 46,540
4,096 46,368
8,000 47,054
8,192 46,671
16,000 46,466
16,384 32,504
32,000 20,620
32,768 20,807
>
> Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> much CPU overhead.
>
>> I found the results for 4, 8, and 64 odd so I re-ran some tests to check
>> for consistency. I used values of 2 and 4 and ran each 5 times. Here
>> is what I got:
>>
>> Iteration QEMU_AIO_POLL_MAX_NS=2 QEMU_AIO_POLL_MAX_NS=4
>> 1 46,972 35,434
>> 2 46,939 35,719
>> 3 47,005 35,584
>> 4 47,016 35,615
>> 5 47,267 35,474
>>
>> So the results seem consistent.
>
> That is interesting. I don't have an explanation for the consistent
> difference between 2 and 4 ns polling time. The time difference is so
> small yet the IOPS difference is clear.
>
> Comparing traces could shed light on the cause for this difference.
>
>> I saw some discussion on the patches made which make me think you'll be
>> making some changes, is that right? If so, I may wait for the updates
>> and then we can run the much more exhaustive set of workloads
>> (sequential read and write, random read and write) at various block
>> sizes (4, 8, 16, 32, 64, 128, and 256) and multiple IO depths (1 and 32)
>> that we were doing when we started looking at this.
>
> I'll send an updated version of the patches.
>
> Stefan
>
--
Karl Rister <krister@redhat.com>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 20:12 ` Karl Rister
@ 2016-11-14 20:52 ` Paolo Bonzini
2016-11-15 10:32 ` Stefan Hajnoczi
0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-11-14 20:52 UTC (permalink / raw)
To: krister, Stefan Hajnoczi
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
On 14/11/2016 21:12, Karl Rister wrote:
> 256 46,929
> 512 35,627
> 1,024 46,477
> 2,000 35,247
> 2,048 46,322
> 4,000 46,540
> 4,096 46,368
> 8,000 47,054
> 8,192 46,671
> 16,000 46,466
> 16,384 32,504
> 32,000 20,620
> 32,768 20,807
Huh, it breaks down exactly when it should start going faster
(10^9/46000 = ~21000).
Paolo
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 20:52 ` Paolo Bonzini
@ 2016-11-15 10:32 ` Stefan Hajnoczi
2016-11-15 18:45 ` Karl Rister
0 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-15 10:32 UTC (permalink / raw)
To: Paolo Bonzini
Cc: krister, Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
On Mon, Nov 14, 2016 at 09:52:00PM +0100, Paolo Bonzini wrote:
> On 14/11/2016 21:12, Karl Rister wrote:
> > 256 46,929
> > 512 35,627
> > 1,024 46,477
> > 2,000 35,247
> > 2,048 46,322
> > 4,000 46,540
> > 4,096 46,368
> > 8,000 47,054
> > 8,192 46,671
> > 16,000 46,466
> > 16,384 32,504
> > 32,000 20,620
> > 32,768 20,807
>
> Huh, it breaks down exactly when it should start going faster
> (10^9/46000 = ~21000).
Could it be because we're not breaking the polling loop for BHs, new
timers, or aio_notify()?
Once that is fixed polling should achieve maximum performance when
QEMU_AIO_POLL_MAX_NS is at least as long as the duration of a request.
This is logical if there are enough pinned CPUs so the polling thread
can run flat out.
Stefan
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 17:15 ` Paolo Bonzini
@ 2016-11-15 10:36 ` Stefan Hajnoczi
0 siblings, 0 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2016-11-15 10:36 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Karl Rister, Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
On Mon, Nov 14, 2016 at 06:15:46PM +0100, Paolo Bonzini wrote:
>
>
> On 14/11/2016 18:06, Stefan Hajnoczi wrote:
> >>> > > Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> >>> > > much CPU overhead.
> >> >
> >> > That basically means "avoid a syscall if you already know there's
> >> > something to do", so in retrospect it's not that surprising. Still
> >> > interesting though, and it means that the feature is useful even if you
> >> > don't have CPU to waste.
> > Can you spell out which syscall you mean? Reading the ioeventfd?
>
> I mean ppoll. If ppoll succeeds without ever going to sleep, you can
> achieve the same result with QEMU_AIO_POLL_MAX_NS=1, but cheaper.
It's not obvious to me that ioeventfd or Linux AIO will become ready
with QEMU_AIO_POLL_MAX_NS=1.
This benchmark is iodepth=1 so there's just a single request. Fam
suggested that maybe Linux AIO is ready immediately but AFAIK this NVMe
device should still take a few microseconds to complete a request
whereas our polling time is 1 nanosecond.
Tracing would reveal what is going on here.
Stefan
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-15 10:32 ` Stefan Hajnoczi
@ 2016-11-15 18:45 ` Karl Rister
0 siblings, 0 replies; 29+ messages in thread
From: Karl Rister @ 2016-11-15 18:45 UTC (permalink / raw)
To: Stefan Hajnoczi, Paolo Bonzini
Cc: Stefan Hajnoczi, qemu-devel, Andrew Theurer, Fam Zheng
On 11/15/2016 04:32 AM, Stefan Hajnoczi wrote:
> On Mon, Nov 14, 2016 at 09:52:00PM +0100, Paolo Bonzini wrote:
>> On 14/11/2016 21:12, Karl Rister wrote:
>>> 256 46,929
>>> 512 35,627
>>> 1,024 46,477
>>> 2,000 35,247
>>> 2,048 46,322
>>> 4,000 46,540
>>> 4,096 46,368
>>> 8,000 47,054
>>> 8,192 46,671
>>> 16,000 46,466
>>> 16,384 32,504
>>> 32,000 20,620
>>> 32,768 20,807
>>
>> Huh, it breaks down exactly when it should start going faster
>> (10^9/46000 = ~21000).
>
> Could it be because we're not breaking the polling loop for BHs, new
> timers, or aio_notify()?
>
> Once that is fixed polling should achieve maximum performance when
> QEMU_AIO_POLL_MAX_NS is at least as long as the duration of a request.
>
> This is logical if there are enough pinned CPUs so the polling thread
> can run flat out.
>
I removed all the pinning and restored the guest to a "normal"
configuration.
QEMU_AIO_POLL_MAX_NS IOPs
unset 25,553
1 28,684
2 38,213
4 29,413
8 38,612
16 30,578
32 30,145
64 41,637
128 28,554
256 29,661
512 39,178
1,024 29,644
2,048 37,190
4,096 29,838
8,192 38,581
16,384 37,793
32,768 20,332
65,536 35,755
--
Karl Rister <krister@redhat.com>
* Re: [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler()
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
2016-11-09 17:30 ` Paolo Bonzini
@ 2016-11-15 20:14 ` Fam Zheng
1 sibling, 0 replies; 29+ messages in thread
From: Fam Zheng @ 2016-11-15 20:14 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, Paolo Bonzini, Karl Rister
On Wed, 11/09 17:13, Stefan Hajnoczi wrote:
> +struct AioPollHandler {
> + QLIST_ENTRY(AioPollHandler) node;
> +
> + AioPollFn *poll_fn; /* check whether to invoke io_fn() */
> + IOHandler *io_fn; /* handler callback */
> + void *opaque; /* user-defined argument to callbacks */
> +
> + bool deleted;
> +};
<...>
> + } else { /* add or update */
> + if (!node) {
> + node = g_new(AioPollHandler, 1);
> + QLIST_INSERT_HEAD(&ctx->aio_poll_handlers, node, node);
> + }
> +
> + node->poll_fn = poll_fn;
> + node->io_fn = io_fn;
> + node->opaque = opaque;
Ouch, "deleted" is not initialized and may cause the node to be removed at the
next run_poll_handlers() call! :(
This is the cause of the jumpy numbers I saw, with it fixed I expect the
behavior will be much more consistent.
Fam
> + }
> +
> + aio_notify(ctx);
> +}
> +
> +
> bool aio_prepare(AioContext *ctx)
> {
> + /* TODO run poll handlers? */
> return false;
> }
>
> @@ -400,6 +467,47 @@ static void add_pollfd(AioHandler *node)
> npfd++;
> }
>
> +static bool run_poll_handlers(AioContext *ctx)
> +{
> + int64_t start_time;
> + unsigned int loop_count = 0;
> + bool fired = false;
> +
> + /* Is there any polling to be done? */
> + if (!QLIST_FIRST(&ctx->aio_poll_handlers)) {
> + return false;
> + }
> +
> + start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
> + while (!fired) {
> + AioPollHandler *node;
> + AioPollHandler *tmp;
> +
> + QLIST_FOREACH_SAFE(node, &ctx->aio_poll_handlers, node, tmp) {
> + ctx->walking_poll_handlers++;
> + if (!node->deleted && node->poll_fn(node->opaque)) {
> + node->io_fn(node->opaque);
> + fired = true;
> + }
> + ctx->walking_poll_handlers--;
> +
> + if (!ctx->walking_poll_handlers && node->deleted) {
> + QLIST_REMOVE(node, node);
> + g_free(node);
> + }
> + }
> +
> + loop_count++;
> + if ((loop_count % 1024) == 0 &&
> + qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time >
> + aio_poll_max_ns) {
> + break;
> + }
> + }
> +
> + return fired;
> +}
> +
> bool aio_poll(AioContext *ctx, bool blocking)
> {
> AioHandler *node;
> @@ -410,6 +518,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
> aio_context_acquire(ctx);
> progress = false;
>
> + if (aio_poll_max_ns &&
> + /* see qemu_soonest_timeout() uint64_t hack */
> + (uint64_t)aio_compute_timeout(ctx) > (uint64_t)aio_poll_max_ns) {
> + if (run_poll_handlers(ctx)) {
> + progress = true;
> + blocking = false; /* poll again, don't block */
> + }
> + }
> +
> /* aio_notify can avoid the expensive event_notifier_set if
> * everything (file descriptors, bottom halves, timers) will
> * be re-evaluated before the next blocking poll(). This is
> @@ -484,6 +601,22 @@ bool aio_poll(AioContext *ctx, bool blocking)
>
> void aio_context_setup(AioContext *ctx)
> {
> + if (!aio_poll_max_ns) {
> + int64_t val;
> + const char *env_str = getenv("QEMU_AIO_POLL_MAX_NS");
> +
> + if (!env_str) {
> + env_str = "0";
> + }
> +
> + if (!qemu_strtoll(env_str, NULL, 10, &val)) {
> + aio_poll_max_ns = val;
> + } else {
> + fprintf(stderr, "Unable to parse QEMU_AIO_POLL_MAX_NS "
> + "environment variable\n");
> + }
> + }
> +
> #ifdef CONFIG_EPOLL_CREATE1
> assert(!ctx->epollfd);
> ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
> diff --git a/include/block/aio.h b/include/block/aio.h
> index c7ae27c..2be1955 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -42,8 +42,10 @@ void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
> void qemu_aio_unref(void *p);
> void qemu_aio_ref(void *p);
>
> +typedef struct AioPollHandler AioPollHandler;
> typedef struct AioHandler AioHandler;
> typedef void QEMUBHFunc(void *opaque);
> +typedef bool AioPollFn(void *opaque);
> typedef void IOHandler(void *opaque);
>
> struct ThreadPool;
> @@ -64,6 +66,15 @@ struct AioContext {
> */
> int walking_handlers;
>
> + /* The list of registered AIO poll handlers */
> + QLIST_HEAD(, AioPollHandler) aio_poll_handlers;
> +
> + /* This is a simple lock used to protect the aio_poll_handlers list.
> + * Specifically, it's used to ensure that no callbacks are removed while
> + * we're walking and dispatching callbacks.
> + */
> + int walking_poll_handlers;
> +
> /* Used to avoid unnecessary event_notifier_set calls in aio_notify;
> * accessed with atomic primitives. If this field is 0, everything
> * (file descriptors, bottom halves, timers) will be re-evaluated
> @@ -327,6 +338,11 @@ void aio_set_fd_handler(AioContext *ctx,
> IOHandler *io_write,
> void *opaque);
>
> +void aio_set_poll_handler(AioContext *ctx,
> + AioPollFn *poll_fn,
> + IOHandler *io_fn,
> + void *opaque);
> +
> /* Register an event notifier and associated callbacks. Behaves very similarly
> * to event_notifier_set_handler. Unlike event_notifier_set_handler, these callbacks
> * will be invoked when using aio_poll().
> --
> 2.7.4
>
* Re: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
2016-11-14 15:29 ` Paolo Bonzini
2016-11-14 17:06 ` Stefan Hajnoczi
@ 2016-11-16 8:27 ` Fam Zheng
1 sibling, 0 replies; 29+ messages in thread
From: Fam Zheng @ 2016-11-16 8:27 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Stefan Hajnoczi, Karl Rister, Stefan Hajnoczi, qemu-devel,
Andrew Theurer
On Mon, 11/14 16:29, Paolo Bonzini wrote:
>
>
> On 14/11/2016 16:26, Stefan Hajnoczi wrote:
> > On Fri, Nov 11, 2016 at 01:59:25PM -0600, Karl Rister wrote:
> >> QEMU_AIO_POLL_MAX_NS IOPs
> >> unset 31,383
> >> 1 46,860
> >> 2 46,440
> >> 4 35,246
> >> 8 34,973
> >> 16 46,794
> >> 32 46,729
> >> 64 35,520
> >> 128 45,902
> >
> > The environment variable is in nanoseconds. The range of values you
> > tried are very small (all <1 usec). It would be interesting to try
> > larger values in the ballpark of the latencies you have traced. For
> > example 2000, 4000, 8000, 16000, and 32000 ns.
> >
> > Very interesting that QEMU_AIO_POLL_MAX_NS=1 performs so well without
> > much CPU overhead.
>
> That basically means "avoid a syscall if you already know there's
> something to do", so in retrospect it's not that surprising. Still
> interesting though, and it means that the feature is useful even if you
> don't have CPU to waste.
With the "deleted" bug fixed I did a little more testing to understand this.
Setting QEMU_AIO_POLL_MAX_NS=1 doesn't mean run_poll_handlers() will only loop
for 1 ns - the patch only checks the elapsed time every 1024 polls. The first
poll in a run_poll_handlers() call can hardly succeed, so we poll at least 1024
times.
According to my test, on average each run_poll_handlers() takes ~12000 ns, which
is ~160 iterations of the poll loop, before getting a new event (either from the
virtio queue or linux-aio, I don't have the ratio here).
So in the worst case (no new event), 1024 iterations is basically (12000 / 160 *
1024) = 76800 ns!
The above is with iodepth=1 and jobs=1. With iodepth=32 and jobs=1, or
iodepth=8 and jobs=4, a new event arrives around the ~30th poll, after ~5600 ns.
Fam
end of thread, other threads:[~2016-11-16 8:27 UTC | newest]
2016-11-09 17:13 [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 1/3] aio-posix: add aio_set_poll_handler() Stefan Hajnoczi
2016-11-09 17:30 ` Paolo Bonzini
2016-11-10 10:17 ` Stefan Hajnoczi
2016-11-10 13:20 ` Paolo Bonzini
2016-11-15 20:14 ` Fam Zheng
2016-11-09 17:13 ` [Qemu-devel] [RFC 2/3] virtio: poll virtqueues for new buffers Stefan Hajnoczi
2016-11-09 17:13 ` [Qemu-devel] [RFC 3/3] linux-aio: poll ring for completions Stefan Hajnoczi
2016-11-11 19:59 ` [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode Karl Rister
2016-11-14 13:53 ` Fam Zheng
2016-11-14 14:52 ` Karl Rister
2016-11-14 16:56 ` Stefan Hajnoczi
2016-11-14 15:26 ` Stefan Hajnoczi
2016-11-14 15:29 ` Paolo Bonzini
2016-11-14 17:06 ` Stefan Hajnoczi
2016-11-14 17:13 ` Fam Zheng
2016-11-14 17:15 ` Paolo Bonzini
2016-11-15 10:36 ` Stefan Hajnoczi
2016-11-16 8:27 ` Fam Zheng
2016-11-14 15:36 ` Karl Rister
2016-11-14 20:12 ` Karl Rister
2016-11-14 20:52 ` Paolo Bonzini
2016-11-15 10:32 ` Stefan Hajnoczi
2016-11-15 18:45 ` Karl Rister
2016-11-13 6:20 ` no-reply
2016-11-14 14:51 ` Christian Borntraeger
2016-11-14 16:53 ` Stefan Hajnoczi
2016-11-14 14:59 ` Christian Borntraeger
2016-11-14 16:52 ` Stefan Hajnoczi