* [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
This patch series implements vhost-pci, a point-to-point inter-VM
communication solution. The QEMU-side implementation includes the
vhost-user extensions, vhost-pci device emulation and management, and
inter-VM notification support.
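As an end-to-end sketch, the two QEMU instances are wired together as
follows (the option fragments are the ones introduced in patch 02; the
socket path and IDs are illustrative). The slave VM exposes a vhost-pci
slave on a server chardev:

    -chardev socket,id=slave1,server,wait=off,path=${PATH_SLAVE1}
    -vhost-pci-slave socket,chardev=slave1

The master VM then connects to it as a regular vhost-user master:

    -chardev socket,id=sock2,path=${PATH_SLAVE1}
    -netdev type=vhost-user,id=net2,chardev=sock2,vhostforce
    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=net2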
v1->v2 changes:
1) inter-VM notification support;
2) vhost-pci-net ctrlq message format change;
3) patch re-org and code cleanup.
Wei Wang (16):
vhost-user: share the vhost-user protocol related structures
vl: add the vhost-pci-slave command line option
vhost-pci-slave: create a vhost-user slave to support vhost-pci
vhost-pci-net: add vhost-pci-net
vhost-pci-net-pci: add vhost-pci-net-pci
virtio: add inter-vm notification support
vhost-user: send device id to the slave
vhost-user: send guest physical address of virtqueues to the slave
vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP
vhost-pci-net: send the negotiated feature bits to the master
vhost-user: add asynchronous read for the vhost-user master
vhost-user: handling VHOST_USER_SET_FEATURES
vhost-pci-slave: add "reset_virtio"
vhost-pci-slave: add support to delete a vhost-pci device
vhost-pci-net: tell the driver that it is ready to send packets
vl: enable vhost-pci-slave
hw/net/Makefile.objs | 2 +-
hw/net/vhost-pci-net.c | 364 +++++++++++++
hw/net/vhost_net.c | 39 ++
hw/virtio/Makefile.objs | 7 +-
hw/virtio/vhost-pci-slave.c | 676 +++++++++++++++++++++++++
hw/virtio/vhost-stub.c | 22 +
hw/virtio/vhost-user.c | 192 +++----
hw/virtio/vhost.c | 63 ++-
hw/virtio/virtio-bus.c | 19 +-
hw/virtio/virtio-pci.c | 96 +++-
hw/virtio/virtio-pci.h | 16 +
hw/virtio/virtio.c | 32 +-
include/hw/pci/pci.h | 1 +
include/hw/virtio/vhost-backend.h | 2 +
include/hw/virtio/vhost-pci-net.h | 40 ++
include/hw/virtio/vhost-pci-slave.h | 64 +++
include/hw/virtio/vhost-user.h | 110 ++++
include/hw/virtio/vhost.h | 3 +
include/hw/virtio/virtio.h | 2 +
include/net/vhost-user.h | 22 +-
include/net/vhost_net.h | 2 +
include/standard-headers/linux/vhost_pci_net.h | 74 +++
include/standard-headers/linux/virtio_ids.h | 1 +
net/vhost-user.c | 37 +-
qemu-options.hx | 4 +
vl.c | 46 ++
26 files changed, 1796 insertions(+), 140 deletions(-)
create mode 100644 hw/net/vhost-pci-net.c
create mode 100644 hw/virtio/vhost-pci-slave.c
create mode 100644 include/hw/virtio/vhost-pci-net.h
create mode 100644 include/hw/virtio/vhost-pci-slave.h
create mode 100644 include/hw/virtio/vhost-user.h
create mode 100644 include/standard-headers/linux/vhost_pci_net.h
--
2.7.4
* [Qemu-devel] [PATCH v2 01/16] vhost-user: share the vhost-user protocol related structures
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Move the vhost-user protocol related data structures into vhost-user.h,
so that they can be used by other implementations (e.g. a slave
implementation).
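For reference, a slave implementation built on these structures reads a
message in two steps: the fixed-size header first, then a payload of
msg.size bytes. A minimal sketch, modeled on the slave read path added
later in this series (chr_be stands for the slave's CharBackend):

    VhostUserMsg msg;
    uint8_t *p = (uint8_t *)&msg;

    /* 1) Read the fixed-size header: request, flags and size. */
    if (qemu_chr_fe_read_all(chr_be, p, VHOST_USER_HDR_SIZE)
            != VHOST_USER_HDR_SIZE) {
        return;
    }
    /* 2) Read the payload whose length the header announces. */
    if (msg.size &&
        qemu_chr_fe_read_all(chr_be, p + VHOST_USER_HDR_SIZE, msg.size)
            != msg.size) {
        return;
    }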
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/vhost-user.c | 89 +---------------------------------------
include/hw/virtio/vhost-user.h | 93 ++++++++++++++++++++++++++++++++++++++++++
include/net/vhost-user.h | 4 +-
3 files changed, 96 insertions(+), 90 deletions(-)
create mode 100644 include/hw/virtio/vhost-user.h
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 9334a8a..d161884 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -12,6 +12,7 @@
#include "qapi/error.h"
#include "hw/virtio/vhost.h"
#include "hw/virtio/vhost-backend.h"
+#include "hw/virtio/vhost-user.h"
#include "hw/virtio/virtio-net.h"
#include "sysemu/char.h"
#include "sysemu/kvm.h"
@@ -22,94 +23,6 @@
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/un.h>
-#include <linux/vhost.h>
-
-#define VHOST_MEMORY_MAX_NREGIONS 8
-#define VHOST_USER_F_PROTOCOL_FEATURES 30
-
-enum VhostUserProtocolFeature {
- VHOST_USER_PROTOCOL_F_MQ = 0,
- VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1,
- VHOST_USER_PROTOCOL_F_RARP = 2,
- VHOST_USER_PROTOCOL_F_REPLY_ACK = 3,
- VHOST_USER_PROTOCOL_F_NET_MTU = 4,
-
- VHOST_USER_PROTOCOL_F_MAX
-};
-
-#define VHOST_USER_PROTOCOL_FEATURE_MASK ((1 << VHOST_USER_PROTOCOL_F_MAX) - 1)
-
-typedef enum VhostUserRequest {
- VHOST_USER_NONE = 0,
- VHOST_USER_GET_FEATURES = 1,
- VHOST_USER_SET_FEATURES = 2,
- VHOST_USER_SET_OWNER = 3,
- VHOST_USER_RESET_OWNER = 4,
- VHOST_USER_SET_MEM_TABLE = 5,
- VHOST_USER_SET_LOG_BASE = 6,
- VHOST_USER_SET_LOG_FD = 7,
- VHOST_USER_SET_VRING_NUM = 8,
- VHOST_USER_SET_VRING_ADDR = 9,
- VHOST_USER_SET_VRING_BASE = 10,
- VHOST_USER_GET_VRING_BASE = 11,
- VHOST_USER_SET_VRING_KICK = 12,
- VHOST_USER_SET_VRING_CALL = 13,
- VHOST_USER_SET_VRING_ERR = 14,
- VHOST_USER_GET_PROTOCOL_FEATURES = 15,
- VHOST_USER_SET_PROTOCOL_FEATURES = 16,
- VHOST_USER_GET_QUEUE_NUM = 17,
- VHOST_USER_SET_VRING_ENABLE = 18,
- VHOST_USER_SEND_RARP = 19,
- VHOST_USER_NET_SET_MTU = 20,
- VHOST_USER_MAX
-} VhostUserRequest;
-
-typedef struct VhostUserMemoryRegion {
- uint64_t guest_phys_addr;
- uint64_t memory_size;
- uint64_t userspace_addr;
- uint64_t mmap_offset;
-} VhostUserMemoryRegion;
-
-typedef struct VhostUserMemory {
- uint32_t nregions;
- uint32_t padding;
- VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
-} VhostUserMemory;
-
-typedef struct VhostUserLog {
- uint64_t mmap_size;
- uint64_t mmap_offset;
-} VhostUserLog;
-
-typedef struct VhostUserMsg {
- VhostUserRequest request;
-
-#define VHOST_USER_VERSION_MASK (0x3)
-#define VHOST_USER_REPLY_MASK (0x1<<2)
-#define VHOST_USER_NEED_REPLY_MASK (0x1 << 3)
- uint32_t flags;
- uint32_t size; /* the following payload size */
- union {
-#define VHOST_USER_VRING_IDX_MASK (0xff)
-#define VHOST_USER_VRING_NOFD_MASK (0x1<<8)
- uint64_t u64;
- struct vhost_vring_state state;
- struct vhost_vring_addr addr;
- VhostUserMemory memory;
- VhostUserLog log;
- } payload;
-} QEMU_PACKED VhostUserMsg;
-
-static VhostUserMsg m __attribute__ ((unused));
-#define VHOST_USER_HDR_SIZE (sizeof(m.request) \
- + sizeof(m.flags) \
- + sizeof(m.size))
-
-#define VHOST_USER_PAYLOAD_SIZE (sizeof(m) - VHOST_USER_HDR_SIZE)
-
-/* The version of the protocol we support */
-#define VHOST_USER_VERSION (0x1)
static bool ioeventfd_enabled(void)
{
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
new file mode 100644
index 0000000..766a950
--- /dev/null
+++ b/include/hw/virtio/vhost-user.h
@@ -0,0 +1,93 @@
+#ifndef VHOST_USER_H
+#define VHOST_USER_H
+
+#include <linux/vhost.h>
+
+#define VHOST_MEMORY_MAX_NREGIONS 8
+#define VHOST_USER_F_PROTOCOL_FEATURES 30
+
+enum VhostUserProtocolFeature {
+ VHOST_USER_PROTOCOL_F_MQ = 0,
+ VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1,
+ VHOST_USER_PROTOCOL_F_RARP = 2,
+ VHOST_USER_PROTOCOL_F_REPLY_ACK = 3,
+ VHOST_USER_PROTOCOL_F_NET_MTU = 4,
+
+ VHOST_USER_PROTOCOL_F_MAX
+};
+
+#define VHOST_USER_PROTOCOL_FEATURE_MASK ((1 << VHOST_USER_PROTOCOL_F_MAX) - 1)
+
+typedef enum VhostUserRequest {
+ VHOST_USER_NONE = 0,
+ VHOST_USER_GET_FEATURES = 1,
+ VHOST_USER_SET_FEATURES = 2,
+ VHOST_USER_SET_OWNER = 3,
+ VHOST_USER_RESET_OWNER = 4,
+ VHOST_USER_SET_MEM_TABLE = 5,
+ VHOST_USER_SET_LOG_BASE = 6,
+ VHOST_USER_SET_LOG_FD = 7,
+ VHOST_USER_SET_VRING_NUM = 8,
+ VHOST_USER_SET_VRING_ADDR = 9,
+ VHOST_USER_SET_VRING_BASE = 10,
+ VHOST_USER_GET_VRING_BASE = 11,
+ VHOST_USER_SET_VRING_KICK = 12,
+ VHOST_USER_SET_VRING_CALL = 13,
+ VHOST_USER_SET_VRING_ERR = 14,
+ VHOST_USER_GET_PROTOCOL_FEATURES = 15,
+ VHOST_USER_SET_PROTOCOL_FEATURES = 16,
+ VHOST_USER_GET_QUEUE_NUM = 17,
+ VHOST_USER_SET_VRING_ENABLE = 18,
+ VHOST_USER_SEND_RARP = 19,
+ VHOST_USER_NET_SET_MTU = 20,
+ VHOST_USER_MAX
+} VhostUserRequest;
+
+typedef struct VhostUserMemoryRegion {
+ uint64_t guest_phys_addr;
+ uint64_t memory_size;
+ uint64_t userspace_addr;
+ uint64_t mmap_offset;
+} VhostUserMemoryRegion;
+
+typedef struct VhostUserMemory {
+ uint32_t nregions;
+ uint32_t padding;
+ VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
+} VhostUserMemory;
+
+typedef struct VhostUserLog {
+ uint64_t mmap_size;
+ uint64_t mmap_offset;
+} VhostUserLog;
+
+typedef struct VhostUserMsg {
+ VhostUserRequest request;
+
+#define VHOST_USER_VERSION_MASK (0x3)
+#define VHOST_USER_REPLY_MASK (0x1 << 2)
+#define VHOST_USER_NEED_REPLY_MASK (0x1 << 3)
+ uint32_t flags;
+ uint32_t size; /* the following payload size */
+ union {
+#define VHOST_USER_VRING_IDX_MASK (0xff)
+#define VHOST_USER_VRING_NOFD_MASK (0x1 << 8)
+ uint64_t u64;
+ struct vhost_vring_state state;
+ struct vhost_vring_addr addr;
+ VhostUserMemory memory;
+ VhostUserLog log;
+ } payload;
+} QEMU_PACKED VhostUserMsg;
+
+static VhostUserMsg m __attribute__ ((unused));
+#define VHOST_USER_HDR_SIZE (sizeof(m.request) \
+ + sizeof(m.flags) \
+ + sizeof(m.size))
+
+#define VHOST_USER_PAYLOAD_SIZE (sizeof(m) - VHOST_USER_HDR_SIZE)
+
+/* The version of the protocol we support */
+#define VHOST_USER_VERSION (0x1)
+
+#endif
diff --git a/include/net/vhost-user.h b/include/net/vhost-user.h
index 5bcd8a6..d9e328d 100644
--- a/include/net/vhost-user.h
+++ b/include/net/vhost-user.h
@@ -8,8 +8,8 @@
*
*/
-#ifndef VHOST_USER_H
-#define VHOST_USER_H
+#ifndef NET_VHOST_USER_H
+#define NET_VHOST_USER_H
struct vhost_net;
struct vhost_net *vhost_user_get_vhost_net(NetClientState *nc);
--
2.7.4
* [Qemu-devel] [PATCH v2 02/16] vl: add the vhost-pci-slave command line option
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
An example of the QEMU command to create a vhost-pci-slave is:
-chardev socket,id=slave1,server,wait=off,path=${PATH_SLAVE1}
-vhost-pci-slave socket,chardev=slave1
The master can connect to the slave as usual:
-chardev socket,id=sock2,path=${PATH_SLAVE1}
-netdev type=vhost-user,id=net2,chardev=sock2,vhostforce
-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=net2
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
qemu-options.hx | 4 ++++
vl.c | 22 ++++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/qemu-options.hx b/qemu-options.hx
index 99af8ed..a795850 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -4152,6 +4152,10 @@ contents of @code{iv.b64} to the second secret
ETEXI
+DEF("vhost-pci-slave", HAS_ARG, QEMU_OPTION_vhost_pci_slave,
+ "-vhost-pci-slave socket,chrdev={id}\n"
+ " creates a vhost-pci-slave",
+ QEMU_ARCH_I386)
HXCOMM This is the last statement. Insert new options before this line!
STEXI
diff --git a/vl.c b/vl.c
index 0b4ed52..2ee4713 100644
--- a/vl.c
+++ b/vl.c
@@ -540,6 +540,20 @@ static QemuOptsList qemu_fw_cfg_opts = {
},
};
+static QemuOptsList qemu_vhost_pci_slave_opts = {
+ .name = "vhost-pci-slave",
+ .implied_opt_name = "chardev",
+ .head = QTAILQ_HEAD_INITIALIZER(qemu_vhost_pci_slave_opts.head),
+ .desc = {
+ /*
+ * no elements => accept any
+ * sanity checking will happen later
+ * when setting device properties
+ */
+ { /* end of list */ }
+ },
+};
+
/**
* Get machine options
*
@@ -3027,6 +3041,7 @@ int main(int argc, char **argv, char **envp)
qemu_add_opts(&qemu_icount_opts);
qemu_add_opts(&qemu_semihosting_config_opts);
qemu_add_opts(&qemu_fw_cfg_opts);
+ qemu_add_opts(&qemu_vhost_pci_slave_opts);
module_call_init(MODULE_INIT_OPTS);
runstate_init();
@@ -4039,6 +4054,13 @@ int main(int argc, char **argv, char **envp)
exit(1);
}
break;
+ case QEMU_OPTION_vhost_pci_slave:
+ opts = qemu_opts_parse_noisily(
+ qemu_find_opts("vhost-pci-slave"), optarg, false);
+ if (!opts) {
+ exit(1);
+ }
+ break;
default:
os_parse_cmd_args(popt->index, optarg);
}
--
2.7.4
* [Qemu-devel] [PATCH v2 03/16] vhost-pci-slave: create a vhost-user slave to support vhost-pci
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
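Create a vhost-user slave that listens on the chardev set up via the
vhost-pci-slave option and handles the vhost-user protocol messages sent
by the master: it records the remote virtqueue info (vring num, base,
address, kickfd, callfd) in a list of Remoteq structures, and mmap()s the
master VM's memory regions into a MemoryRegion that will later be exposed
to the guest through a vhost-pci device BAR.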
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/Makefile.objs | 1 +
hw/virtio/vhost-pci-slave.c | 597 ++++++++++++++++++++++++++++++++++++
include/hw/virtio/vhost-pci-slave.h | 61 ++++
include/hw/virtio/vhost-user.h | 13 +
4 files changed, 672 insertions(+)
create mode 100644 hw/virtio/vhost-pci-slave.c
create mode 100644 include/hw/virtio/vhost-pci-slave.h
diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 765d363..5e81f2f 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -3,6 +3,7 @@ common-obj-y += virtio-rng.o
common-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
common-obj-y += virtio-bus.o
common-obj-y += virtio-mmio.o
+common-obj-y += vhost-pci-slave.o
obj-y += virtio.o virtio-balloon.o
obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
new file mode 100644
index 0000000..464afa3
--- /dev/null
+++ b/hw/virtio/vhost-pci-slave.c
@@ -0,0 +1,597 @@
+/*
+ * Vhost-pci Slave
+ *
+ * Copyright Intel Corp. 2017
+ *
+ * Authors:
+ * Wei Wang <wei.w.wang@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include <qemu/osdep.h>
+#include <qemu/sockets.h>
+
+#include "monitor/qdev.h"
+#include "qapi/error.h"
+#include "qemu/config-file.h"
+#include "qemu/error-report.h"
+#include "hw/virtio/virtio-pci.h"
+#include "hw/virtio/vhost-pci-slave.h"
+#include "hw/virtio/vhost-user.h"
+
+/*
+ * The basic feature bits for all vhost-pci devices. They will be or-ed
+ * with the device-specific feature bits (e.g. VHOST_PCI_NET_FEATURE_BITS)
+ * defined below.
+ */
+#define VHOST_PCI_FEATURE_BITS (1ULL << VIRTIO_F_VERSION_1)
+
+/*
+ * The device features here are first sent to the remote virtio-net device
+ * for negotiation. The remotely negotiated features are then given
+ * to the vhost-pci-net device to negotiate with its driver.
+ */
+#define VHOST_PCI_NET_FEATURE_BITS ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
+ (1ULL << VIRTIO_NET_F_CTRL_VQ) | \
+ (1ULL << VIRTIO_NET_F_MQ))
+
+VhostPCISlave *vp_slave;
+
+/* Clean up VhostPCIDev */
+static void vp_dev_cleanup(void)
+{
+ int ret;
+ uint32_t i, nregions;
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ Remoteq *remoteq;
+
+ /*
+ * Normally, the pointer should point to the slave device's vdev.
+ * Otherwise, no vhost-pci device has been created yet; in this
+ * case, just return.
+ */
+ if (!vp_dev->vdev) {
+ return;
+ }
+
+ nregions = vp_dev->remote_mem_num;
+ for (i = 0; i < nregions; i++) {
+ ret = munmap(vp_dev->mr_map_base[i], vp_dev->mr_map_size[i]);
+ if (ret < 0) {
+ error_report("%s: failed to unmap mr[%d]", __func__, i);
+ continue;
+ }
+ memory_region_del_subregion(vp_dev->bar_mr, vp_dev->sub_mr + i);
+ }
+
+ if (!QLIST_EMPTY(&vp_dev->remoteq_list)) {
+ QLIST_FOREACH(remoteq, &vp_dev->remoteq_list, node)
+ g_free(remoteq);
+ }
+ QLIST_INIT(&vp_dev->remoteq_list);
+ vp_dev->remoteq_num = 0;
+ vp_dev->vdev = NULL;
+}
+
+static int vp_slave_write(CharBackend *chr_be, VhostUserMsg *msg)
+{
+ int size;
+
+ if (!msg) {
+ return 0;
+ }
+
+ /* The payload size has already been assigned; add the header size here */
+ size = msg->size + VHOST_USER_HDR_SIZE;
+ msg->flags &= ~VHOST_USER_VERSION_MASK;
+ msg->flags |= VHOST_USER_VERSION;
+
+ return qemu_chr_fe_write_all(chr_be, (const uint8_t *)msg, size)
+ == size ? 0 : -1;
+}
+
+static int vp_slave_get_features(CharBackend *chr_be, VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ /* Offer the initial features, which have the protocol feature bit set */
+ msg->payload.u64 = vp_dev->feature_bits;
+ msg->size = sizeof(msg->payload.u64);
+ msg->flags |= VHOST_USER_REPLY_MASK;
+
+ return vp_slave_write(chr_be, msg);
+}
+
+static void vp_slave_set_features(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ /*
+ * Get the remotely negotiated feature bits. They will later be used by
+ * the vhost-pci device to negotiate with its driver. Clear the protocol
+ * feature bit, which is not needed for the device-driver negotiation.
+ */
+ vp_dev->feature_bits = msg->payload.u64 &
+ ~(1 << VHOST_USER_F_PROTOCOL_FEATURES);
+}
+
+static void vp_slave_event(void *opaque, int event)
+{
+ switch (event) {
+ case CHR_EVENT_OPENED:
+ break;
+ case CHR_EVENT_CLOSED:
+ break;
+ }
+}
+
+static int vp_slave_get_protocol_features(CharBackend *chr_be,
+ VhostUserMsg *msg)
+{
+ msg->payload.u64 = VHOST_USER_PROTOCOL_FEATURES;
+ msg->size = sizeof(msg->payload.u64);
+ msg->flags |= VHOST_USER_REPLY_MASK;
+
+ return vp_slave_write(chr_be, msg);
+}
+
+static void vp_slave_set_device_type(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ vp_dev->dev_type = (uint16_t)msg->payload.u64;
+
+ switch (vp_dev->dev_type) {
+ case VIRTIO_ID_NET:
+ vp_dev->feature_bits |= VHOST_PCI_FEATURE_BITS |
+ VHOST_PCI_NET_FEATURE_BITS;
+ break;
+ default:
+ error_report("%s: device type %d is not supported",
+ __func__, vp_dev->dev_type);
+ }
+}
+
+static int vp_slave_get_queue_num(CharBackend *chr_be, VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ switch (vp_dev->dev_type) {
+ case VIRTIO_ID_NET:
+ msg->payload.u64 = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX;
+ break;
+ default:
+ error_report("%s: device type %d is not supported", __func__,
+ vp_dev->dev_type);
+ return -1;
+ }
+ msg->size = sizeof(msg->payload.u64);
+ msg->flags |= VHOST_USER_REPLY_MASK;
+
+ return vp_slave_write(chr_be, msg);
+}
+
+/* Calculate the memory size of all the regions */
+static uint64_t vp_slave_peer_mem_size_get(VhostUserMemory *mem)
+{
+ int i;
+ uint64_t total_size = 0;
+ uint32_t nregions = mem->nregions;
+ VhostUserMemoryRegion *mem_regions = mem->regions;
+
+ for (i = 0; i < nregions; i++) {
+ total_size += mem_regions[i].memory_size;
+ }
+
+ return total_size;
+}
+
+/* Prepare the memory for the vhost-pci device bar */
+static int vp_slave_set_mem_table(VhostUserMsg *msg, int *fds, int fd_num)
+{
+ VhostUserMemory *mem = &msg->payload.memory;
+ VhostUserMemoryRegion *mem_region = mem->regions;
+ uint32_t i, nregions = mem->nregions;
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ vp_dev->remote_mem_num = nregions;
+ MemoryRegion *bar_mr, *sub_mr;
+ uint64_t bar_size, bar_map_offset = 0;
+ RemoteMem *rmem;
+ void *mr_qva;
+
+ /* Sanity Check */
+ if (fd_num != nregions) {
+ error_report("%s: fd num doesn't match region num", __func__);
+ return -1;
+ }
+
+ if (!vp_dev->bar_mr) {
+ vp_dev->bar_mr = g_malloc(sizeof(MemoryRegion));
+ }
+ if (!vp_dev->sub_mr) {
+ vp_dev->sub_mr = g_malloc(nregions * sizeof(MemoryRegion));
+ }
+ bar_mr = vp_dev->bar_mr;
+ sub_mr = vp_dev->sub_mr;
+
+ bar_size = vp_slave_peer_mem_size_get(mem);
+ bar_size = pow2ceil(bar_size);
+ memory_region_init(bar_mr, NULL, "RemoteMemory", bar_size);
+ for (i = 0; i < nregions; i++) {
+ vp_dev->mr_map_size[i] = mem_region[i].memory_size +
+ mem_region[i].mmap_offset;
+ /*
+ * Map the remote memory regions into QEMU. They will then be exposed
+ * to the guest via a vhost-pci device BAR. The mapped base address and
+ * size are recorded for the cleanup function to use.
+ */
+ vp_dev->mr_map_base[i] = mmap(NULL, vp_dev->mr_map_size[i],
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fds[i], 0);
+ if (vp_dev->mr_map_base[i] == MAP_FAILED) {
+ error_report("%s: map peer memory region %d failed", __func__, i);
+ return -1;
+ }
+
+ mr_qva = vp_dev->mr_map_base[i] + mem_region[i].mmap_offset;
+ /*
+ * This BAR MMIO differs from the traditional one, because the memory
+ * is set up as regular RAM. The guest will be able to access it
+ * directly, just like its own RAM.
+ */
+ memory_region_init_ram_ptr(&sub_mr[i], NULL, "RemoteMemory",
+ mem_region[i].memory_size, mr_qva);
+ /*
+ * The remote memory regions, which are scattered in the remote VM's
+ * address space, are laid out contiguously in the BAR.
+ */
+ memory_region_add_subregion(bar_mr, bar_map_offset, &sub_mr[i]);
+ bar_map_offset += mem_region[i].memory_size;
+ rmem = &vp_dev->remote_mem[i];
+ rmem->gpa = mem_region[i].guest_phys_addr;
+ rmem->size = mem_region[i].memory_size;
+ }
+ vp_dev->bar_map_offset = bar_map_offset;
+
+ return 0;
+}
+
+static void vp_slave_alloc_remoteq(void)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ Remoteq *remoteq = g_malloc0(sizeof(Remoteq));
+ /*
+ * Add the newly allocated remoteq to the list. We don't know in advance
+ * how many remoteqs the remote device will send, so insert each one
+ * into the list as it arrives.
+ */
+ QLIST_INSERT_HEAD(&vp_dev->remoteq_list, remoteq, node);
+ vp_dev->remoteq_num++;
+}
+
+static void vp_slave_set_vring_num(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ /*
+ * The info (vring_num, base, etc.) is sent for the last remoteq, which
+ * was put at the head of the list and has not yet been filled in.
+ */
+ Remoteq *remoteq = QLIST_FIRST(&vp_dev->remoteq_list);
+
+ remoteq->vring_num = msg->payload.u64;
+}
+
+static void vp_slave_set_vring_base(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ Remoteq *remoteq = QLIST_FIRST(&vp_dev->remoteq_list);
+
+ remoteq->last_avail_idx = msg->payload.u64;
+}
+
+static int vp_slave_get_vring_base(CharBackend *chr_be, VhostUserMsg *msg)
+{
+ msg->flags |= VHOST_USER_REPLY_MASK;
+ msg->size = sizeof(m.payload.state);
+ /* Send back the last_avail_idx, which is 0 here */
+ msg->payload.state.num = 0;
+
+ return vp_slave_write(chr_be, msg);
+}
+
+static void vp_slave_set_vring_addr(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ Remoteq *remoteq = QLIST_FIRST(&vp_dev->remoteq_list);
+ memcpy(&remoteq->addr, &msg->payload.addr,
+ sizeof(struct vhost_vring_addr));
+}
+
+static void vp_slave_set_vring_kick(int fd)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ Remoteq *remoteq = QLIST_FIRST(&vp_dev->remoteq_list);
+ if (remoteq) {
+ remoteq->kickfd = fd;
+ }
+}
+
+static void vp_slave_set_vring_call(int fd)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ Remoteq *remoteq = QLIST_FIRST(&vp_dev->remoteq_list);
+ if (remoteq) {
+ remoteq->callfd = fd;
+ }
+}
+
+static void vp_slave_set_vring_enable(VhostUserMsg *msg)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ struct vhost_vring_state *state = &msg->payload.state;
+ Remoteq *remoteq;
+ QLIST_FOREACH(remoteq, &vp_dev->remoteq_list, node) {
+ if (remoteq->vring_num == state->index) {
+ remoteq->enabled = (int)state->num;
+ break;
+ }
+ }
+}
+
+static int vp_slave_device_create(uint16_t virtio_id)
+{
+ Error *local_err = NULL;
+ QemuOpts *opts;
+ DeviceState *dev;
+ char params[50];
+
+ switch (virtio_id) {
+ case VIRTIO_ID_NET:
+ strcpy(params, "driver=vhost-pci-net-pci,id=vhost-pci-0");
+ break;
+ default:
+ error_report("%s: device type %d not supported", __func__, virtio_id);
+ }
+
+ opts = qemu_opts_parse_noisily(qemu_find_opts("device"), params, true);
+ dev = qdev_device_add(opts, &local_err);
+ if (!dev) {
+ qemu_opts_del(opts);
+ return -1;
+ }
+ object_unref(OBJECT(dev));
+ return 0;
+}
+
+static int vp_slave_set_vhost_pci(VhostUserMsg *msg)
+{
+ int ret = 0;
+ uint8_t cmd = (uint8_t)msg->payload.u64;
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ switch (cmd) {
+ case VHOST_USER_SET_VHOST_PCI_START:
+ ret = vp_slave_device_create(vp_dev->dev_type);
+ if (ret < 0) {
+ return ret;
+ }
+ break;
+ case VHOST_USER_SET_VHOST_PCI_STOP:
+ break;
+ default:
+ error_report("%s: cmd %d not supported", __func__, cmd);
+ return -1;
+ }
+
+ return ret;
+}
+
+static int vp_slave_can_read(void *opaque)
+{
+ return VHOST_USER_HDR_SIZE;
+}
+
+static void vp_slave_read(void *opaque, const uint8_t *buf, int size)
+{
+ int ret, fd_num, fds[MAX_GUEST_REGION];
+ VhostUserMsg msg;
+ uint8_t *p = (uint8_t *) &msg;
+ CharBackend *chr_be = (CharBackend *)opaque;
+
+ if (size != VHOST_USER_HDR_SIZE) {
+ error_report("%s: wrong message size received %d", __func__, size);
+ return;
+ }
+
+ memcpy(p, buf, VHOST_USER_HDR_SIZE);
+
+ if (msg.size) {
+ p += VHOST_USER_HDR_SIZE;
+ size = qemu_chr_fe_read_all(chr_be, p, msg.size);
+ if (size != msg.size) {
+ error_report("%s: wrong message size received %d != %d", __func__,
+ size, msg.size);
+ return;
+ }
+ }
+
+ if (msg.request > VHOST_USER_MAX) {
+ error_report("%s: read an incorrect msg %d", __func__, msg.request);
+ return;
+ }
+
+ switch (msg.request) {
+ case VHOST_USER_GET_FEATURES:
+ ret = vp_slave_get_features(chr_be, &msg);
+ if (ret < 0) {
+ goto err_handling;
+ }
+ break;
+ case VHOST_USER_SET_FEATURES:
+ vp_slave_set_features(&msg);
+ break;
+ case VHOST_USER_GET_PROTOCOL_FEATURES:
+ ret = vp_slave_get_protocol_features(chr_be, &msg);
+ if (ret < 0) {
+ goto err_handling;
+ }
+ break;
+ case VHOST_USER_SET_PROTOCOL_FEATURES:
+ break;
+ case VHOST_USER_SET_DEVICE_ID:
+ /*
+ * Now we know the remote device type. Prepare the related device
+ * feature bits, which the remote device will ask for soon.
+ */
+ vp_slave_set_device_type(&msg);
+ break;
+ case VHOST_USER_GET_QUEUE_NUM:
+ ret = vp_slave_get_queue_num(chr_be, &msg);
+ if (ret < 0) {
+ goto err_handling;
+ }
+ break;
+ case VHOST_USER_SET_OWNER:
+ break;
+ case VHOST_USER_SET_MEM_TABLE:
+ /*
+ * Currently, we don't support adding more memory to the vhost-pci
+ * device after it has been realized in QEMU, so just ignore the
+ * message in this case.
+ */
+ if (vp_slave->vp_dev->vdev) {
+ break;
+ }
+ fd_num = qemu_chr_fe_get_msgfds(chr_be, fds, sizeof(fds) / sizeof(int));
+ vp_slave_set_mem_table(&msg, fds, fd_num);
+ break;
+ case VHOST_USER_SET_VRING_NUM:
+ /*
+ * This is the first message about a remoteq; other messages (e.g. BASE,
+ * ADDR, KICK) will follow shortly. So, allocate a Remoteq structure
+ * here, ready to record info about the remoteq from the upcoming
+ * messages.
+ */
+ vp_slave_alloc_remoteq();
+ vp_slave_set_vring_num(&msg);
+ break;
+ case VHOST_USER_SET_VRING_BASE:
+ vp_slave_set_vring_base(&msg);
+ break;
+ case VHOST_USER_GET_VRING_BASE:
+ ret = vp_slave_get_vring_base(chr_be, &msg);
+ if (ret < 0) {
+ goto err_handling;
+ }
+ break;
+ case VHOST_USER_SET_VRING_ADDR:
+ vp_slave_set_vring_addr(&msg);
+ break;
+ case VHOST_USER_SET_VRING_KICK:
+ /* Consume the fd */
+ qemu_chr_fe_get_msgfds(chr_be, fds, 1);
+ vp_slave_set_vring_kick(fds[0]);
+ /*
+ * This is a non-blocking eventfd.
+ * The receive function forces it to be blocking,
+ * so revert it back to non-blocking.
+ */
+ qemu_set_nonblock(fds[0]);
+ break;
+ case VHOST_USER_SET_VRING_CALL:
+ /* Consume the fd */
+ qemu_chr_fe_get_msgfds(chr_be, fds, 1);
+ vp_slave_set_vring_call(fds[0]);
+ /*
+ * This is a non-blocking eventfd.
+ * The receive function forces it to be blocking,
+ * so revert it back to non-blocking.
+ */
+ qemu_set_nonblock(fds[0]);
+ break;
+ case VHOST_USER_SET_VRING_ENABLE:
+ vp_slave_set_vring_enable(&msg);
+ break;
+ case VHOST_USER_SET_LOG_BASE:
+ break;
+ case VHOST_USER_SET_LOG_FD:
+ qemu_chr_fe_get_msgfds(chr_be, fds, 1);
+ close(fds[0]);
+ break;
+ case VHOST_USER_SEND_RARP:
+ break;
+ case VHOST_USER_SET_VHOST_PCI:
+ ret = vp_slave_set_vhost_pci(&msg);
+ if (ret < 0) {
+ goto err_handling;
+ }
+ break;
+ default:
+ error_report("vhost-pci-slave does not support msg request = %d",
+ msg.request);
+ break;
+ }
+ return;
+
+err_handling:
+ error_report("%s: handle request %d failed", __func__, msg.request);
+}
+
+static Chardev *vp_slave_parse_chardev(const char *id)
+{
+ Chardev *chr = qemu_chr_find(id);
+ if (!chr) {
+ error_report("chardev \"%s\" not found", id);
+ return NULL;
+ }
+
+ return chr;
+}
+
+static void vp_dev_init(VhostPCIDev *vp_dev)
+{
+ vp_dev->feature_bits = 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
+ vp_dev->bar_mr = NULL;
+ vp_dev->sub_mr = NULL;
+ vp_dev->vdev = NULL;
+ QLIST_INIT(&vp_dev->remoteq_list);
+ vp_dev->remoteq_num = 0;
+}
+
+int vhost_pci_slave_init(QemuOpts *opts)
+{
+ Chardev *chr;
+ VhostPCIDev *vp_dev;
+ const char *chardev_id = qemu_opt_get(opts, "chardev");
+
+ vp_slave = g_malloc(sizeof(VhostPCISlave));
+ chr = vp_slave_parse_chardev(chardev_id);
+ if (!chr) {
+ return -1;
+ }
+ vp_dev = g_malloc(sizeof(VhostPCIDev));
+ vp_dev_init(vp_dev);
+ vp_slave->vp_dev = vp_dev;
+
+ qemu_chr_fe_init(&vp_slave->chr_be, chr, &error_abort);
+ qemu_chr_fe_set_handlers(&vp_slave->chr_be, vp_slave_can_read,
+ vp_slave_read, vp_slave_event,
+ &vp_slave->chr_be, NULL, true);
+
+ return 0;
+}
+
+int vhost_pci_slave_cleanup(void)
+{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
+ vp_dev_cleanup();
+ qemu_chr_fe_deinit(&vp_slave->chr_be);
+ g_free(vp_dev->sub_mr);
+ g_free(vp_dev->bar_mr);
+ g_free(vp_dev);
+
+ return 0;
+}
diff --git a/include/hw/virtio/vhost-pci-slave.h b/include/hw/virtio/vhost-pci-slave.h
new file mode 100644
index 0000000..b5bf02a
--- /dev/null
+++ b/include/hw/virtio/vhost-pci-slave.h
@@ -0,0 +1,61 @@
+#ifndef QEMU_VHOST_PCI_SLAVE_H
+#define QEMU_VHOST_PCI_SLAVE_H
+
+#include "linux-headers/linux/vhost.h"
+
+#include "sysemu/char.h"
+#include "exec/memory.h"
+
+typedef struct Remoteq {
+ uint16_t last_avail_idx;
+ uint32_t vring_num;
+ int kickfd;
+ int callfd;
+ int enabled;
+ struct vhost_vring_addr addr;
+ QLIST_ENTRY(Remoteq) node;
+} Remoteq;
+
+typedef struct RemoteMem {
+ uint64_t gpa;
+ uint64_t size;
+} RemoteMem;
+
+#define MAX_GUEST_REGION 8
+/*
+ * The basic vhost-pci device struct.
+ * It is set up by vhost-pci-slave, and shared with the device emulation.
+ */
+typedef struct VhostPCIDev {
+ /* Pointer to the slave device */
+ VirtIODevice *vdev;
+ uint16_t dev_type;
+ uint64_t feature_bits;
+ /* Records the end (offset to the BAR) of the last mapped region */
+ uint64_t bar_map_offset;
+ /* The MemoryRegion that will be registered with a vhost-pci device BAR */
+ MemoryRegion *bar_mr;
+ /* Sub-regions added to the BAR MemoryRegion */
+ MemoryRegion *sub_mr;
+ void *mr_map_base[MAX_GUEST_REGION];
+ uint64_t mr_map_size[MAX_GUEST_REGION];
+
+ uint16_t remote_mem_num;
+ RemoteMem remote_mem[MAX_GUEST_REGION];
+ uint16_t remoteq_num;
+ QLIST_HEAD(, Remoteq) remoteq_list;
+} VhostPCIDev;
+
+/* Currently, a slave supports the creation of only one vhost-pci device */
+typedef struct VhostPCISlave {
+ VhostPCIDev *vp_dev;
+ CharBackend chr_be;
+} VhostPCISlave;
+
+extern int vhost_pci_slave_init(QemuOpts *opts);
+
+extern int vhost_pci_slave_cleanup(void);
+
+VhostPCIDev *get_vhost_pci_dev(void);
+
+#endif
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 766a950..1fccbe2 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -12,11 +12,22 @@ enum VhostUserProtocolFeature {
VHOST_USER_PROTOCOL_F_RARP = 2,
VHOST_USER_PROTOCOL_F_REPLY_ACK = 3,
VHOST_USER_PROTOCOL_F_NET_MTU = 4,
+ VHOST_USER_PROTOCOL_F_VHOST_PCI = 5,
+ VHOST_USER_PROTOCOL_F_SET_DEVICE_ID = 6,
VHOST_USER_PROTOCOL_F_MAX
};
#define VHOST_USER_PROTOCOL_FEATURE_MASK ((1 << VHOST_USER_PROTOCOL_F_MAX) - 1)
+#define VHOST_USER_PROTOCOL_FEATURES ((1ULL << VHOST_USER_PROTOCOL_F_MQ) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_RARP) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_VHOST_PCI) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_SET_DEVICE_ID))
+
+/* Commands sent to start or stop the vhost-pci device */
+#define VHOST_USER_SET_VHOST_PCI_START 0
+#define VHOST_USER_SET_VHOST_PCI_STOP 1
typedef enum VhostUserRequest {
VHOST_USER_NONE = 0,
@@ -40,6 +51,8 @@ typedef enum VhostUserRequest {
VHOST_USER_SET_VRING_ENABLE = 18,
VHOST_USER_SEND_RARP = 19,
VHOST_USER_NET_SET_MTU = 20,
+ VHOST_USER_SET_DEVICE_ID = 21,
+ VHOST_USER_SET_VHOST_PCI = 22,
VHOST_USER_MAX
} VhostUserRequest;
--
2.7.4
* [Qemu-devel] [PATCH v2 04/16] vhost-pci-net: add vhost-pci-net
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Add the vhost-pci-net device emulation.
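The device reports the remote memory regions and the remoteqs to its
driver through ctrlq messages. For illustration, a guest driver consuming
the ctrlq could parse a message roughly as below; this is only a sketch
against the vpnet_ctrlq_msg layout added by this patch (the driver side
is not part of this series, and buf is assumed to hold one received
message):

    struct vpnet_ctrlq_msg *msg = (struct vpnet_ctrlq_msg *)buf;
    uint16_t payload_size = msg->size - VPNET_CTRLQ_MSG_HDR_SIZE;
    int n;

    switch (msg->class) {
    case VHOST_PCI_CTRLQ_MSG_REMOTE_MEM:
        /* msg->payload.msg_remote_mem[0..n-1]: gpa/size of each region */
        n = payload_size / sizeof(struct ctrlq_msg_remote_mem);
        break;
    case VHOST_PCI_CTRLQ_MSG_REMOTEQ:
        /* msg->payload.msg_remoteq[0..n-1]: the remote vring layout */
        n = payload_size / sizeof(struct ctrlq_msg_remoteq);
        break;
    }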
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost-pci-net.c | 248 +++++++++++++++++++++++++
hw/virtio/vhost-pci-slave.c | 5 +
include/hw/virtio/vhost-pci-net.h | 34 ++++
include/standard-headers/linux/vhost_pci_net.h | 74 ++++++++
4 files changed, 361 insertions(+)
create mode 100644 hw/net/vhost-pci-net.c
create mode 100644 include/hw/virtio/vhost-pci-net.h
create mode 100644 include/standard-headers/linux/vhost_pci_net.h
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
new file mode 100644
index 0000000..8e194ba
--- /dev/null
+++ b/hw/net/vhost-pci-net.c
@@ -0,0 +1,248 @@
+/*
+ * vhost-pci-net support
+ *
+ * Copyright Intel, Inc. 2016
+ *
+ * Authors:
+ * Wei Wang <wei.w.wang@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/iov.h"
+#include "qemu/error-report.h"
+#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/vhost-pci-net.h"
+
+#define VPNET_CTRLQ_SIZE 32
+#define VPNET_VQ_SIZE 256
+
+static void vpnet_handle_vq(VirtIODevice *vdev, VirtQueue *vq)
+{
+}
+
+static void vpnet_handle_ctrlq(VirtIODevice *vdev, VirtQueue *vq)
+{
+}
+
+/* Send a ctrlq message to the driver */
+static size_t vpnet_send_ctrlq_msg(VhostPCINet *vpnet,
+ struct vpnet_ctrlq_msg *msg)
+{
+ VirtQueueElement *elem;
+ VirtQueue *vq;
+ size_t msg_len = msg->size;
+
+ vq = vpnet->ctrlq;
+ if (!virtio_queue_ready(vq)) {
+ return 0;
+ }
+
+ elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
+ if (!elem) {
+ return 0;
+ }
+
+ iov_from_buf(elem->in_sg, elem->in_num, 0, msg, msg_len);
+
+ virtqueue_push(vq, elem, msg_len);
+ virtio_notify(VIRTIO_DEVICE(vpnet), vq);
+ g_free(elem);
+
+ return msg_len;
+}
+
+/* Send a ctrlq message of the remote memory to the driver */
+static void vpnet_send_ctrlq_msg_remote_mem(VhostPCINet *vpnet)
+{
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
+ struct vpnet_ctrlq_msg *msg;
+ uint16_t payload_size, msg_size;
+
+ payload_size = vp_dev->remote_mem_num *
+ sizeof(struct ctrlq_msg_remote_mem);
+ msg_size = payload_size + VPNET_CTRLQ_MSG_HDR_SIZE;
+ msg = g_malloc(msg_size);
+ msg->class = VHOST_PCI_CTRLQ_MSG_REMOTE_MEM;
+ msg->size = msg_size;
+ memcpy(msg->payload.msg_remote_mem, vp_dev->remote_mem, payload_size);
+ vpnet_send_ctrlq_msg(vpnet, msg);
+ g_free(msg);
+}
+
+static void vpnet_ctrlq_msg_remoteq_add_one(struct vpnet_ctrlq_msg *msg,
+ Remoteq *remoteq)
+{
+ uint32_t vring_num = remoteq->vring_num;
+ struct ctrlq_msg_remoteq *msg_remoteq;
+
+ msg_remoteq = &msg->payload.msg_remoteq[vring_num];
+ msg_remoteq->last_avail_idx = remoteq->last_avail_idx;
+ msg_remoteq->vring_num = vring_num;
+ msg_remoteq->vring_enable = remoteq->enabled;
+ msg_remoteq->desc_gpa = remoteq->addr.desc_user_addr;
+ msg_remoteq->avail_gpa = remoteq->addr.avail_user_addr;
+ msg_remoteq->used_gpa = remoteq->addr.used_user_addr;
+}
+
+/* Send a ctrlq message of the remoteq info to the driver */
+static void vpnet_send_ctrlq_msg_remoteq(VhostPCINet *vpnet)
+{
+ Remoteq *remoteq;
+ struct vpnet_ctrlq_msg *msg;
+ uint16_t remoteq_num, msg_size;
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
+
+ remoteq_num = vp_dev->remoteq_num;
+ msg_size = sizeof(struct ctrlq_msg_remoteq) * remoteq_num +
+ VPNET_CTRLQ_MSG_HDR_SIZE;
+ msg = g_malloc(msg_size);
+ msg->class = VHOST_PCI_CTRLQ_MSG_REMOTEQ;
+ msg->size = msg_size;
+
+ QLIST_FOREACH(remoteq, &vp_dev->remoteq_list, node) {
+ /* Get remoteqs from the list, and fill them into the ctrlq_msg */
+ vpnet_ctrlq_msg_remoteq_add_one(msg, remoteq);
+ }
+
+ vpnet_send_ctrlq_msg(vpnet, msg);
+ g_free(msg);
+}
+
+static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
+{
+ VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ uint16_t vq_num = vpnet->vq_pairs * 2;
+ BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+ VirtioBusState *vbus = VIRTIO_BUS(qbus);
+ VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ VirtQueue *vq;
+ int r, i;
+
+ /* Send the ctrlq messages to the driver when the ctrlq is ready */
+ if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
+ vpnet_send_ctrlq_msg_remote_mem(vpnet);
+ vpnet_send_ctrlq_msg_remoteq(vpnet);
+ }
+}
+
+static uint64_t vpnet_get_features(VirtIODevice *vdev, uint64_t features,
+ Error **errp)
+{
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
+
+ /*
+ * Give the driver the feature bits that have been negotiated with the
+ * remote device.
+ */
+ return vp_dev->feature_bits;
+}
+
+static void vpnet_set_features(VirtIODevice *vdev, uint64_t features)
+{
+}
+
+static void vpnet_get_config(VirtIODevice *vdev, uint8_t *config)
+{
+ VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ struct vhost_pci_net_config netcfg;
+
+ virtio_stw_p(vdev, &netcfg.status, vpnet->status);
+ virtio_stw_p(vdev, &netcfg.vq_pairs, vpnet->vq_pairs);
+ memcpy(config, &netcfg, vpnet->config_size);
+}
+
+static void vpnet_set_config(VirtIODevice *vdev, const uint8_t *config)
+{
+}
+
+static void vpnet_device_realize(DeviceState *dev, Error **errp)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ uint16_t i, vq_num;
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
+
+ vq_num = vp_dev->remoteq_num;
+ vpnet->vq_pairs = vq_num / 2;
+ virtio_init(vdev, "vhost-pci-net", VIRTIO_ID_VHOST_PCI_NET,
+ vpnet->config_size);
+
+ /* Add local vqs */
+ for (i = 0; i < vq_num; i++) {
+ virtio_add_queue(vdev, VPNET_VQ_SIZE, vpnet_handle_vq);
+ }
+ /* Add the ctrlq */
+ vpnet->ctrlq = virtio_add_queue(vdev, VPNET_CTRLQ_SIZE, vpnet_handle_ctrlq);
+
+ vpnet->status = 0;
+ vp_dev->vdev = vdev;
+}
+
+static void vpnet_device_unrealize(DeviceState *dev, Error **errp)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ uint16_t i, vq_num = vpnet->vq_pairs * 2;
+
+ /* Delete the datapath vqs and the ctrlq */
+ for (i = 0; i < vq_num + 1; i++) {
+ virtio_del_queue(vdev, i);
+ }
+}
+
+static void vpnet_reset(VirtIODevice *vdev)
+{
+}
+
+static Property vpnet_properties[] = {
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void vpnet_instance_init(Object *obj)
+{
+ VhostPCINet *vpnet = VHOST_PCI_NET(obj);
+
+ /*
+ * The default config_size is sizeof(struct vhost_pci_net_config).
+ * Can be overridden with vpnet_set_config_size.
+ */
+ vpnet->config_size = sizeof(struct vhost_pci_net_config);
+}
+
+static void vpnet_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+
+ dc->props = vpnet_properties;
+ set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+ vdc->realize = vpnet_device_realize;
+ vdc->unrealize = vpnet_device_unrealize;
+ vdc->get_config = vpnet_get_config;
+ vdc->set_config = vpnet_set_config;
+ vdc->get_features = vpnet_get_features;
+ vdc->set_features = vpnet_set_features;
+ vdc->set_status = vpnet_set_status;
+ vdc->reset = vpnet_reset;
+}
+
+static const TypeInfo vpnet_info = {
+ .name = TYPE_VHOST_PCI_NET,
+ .parent = TYPE_VIRTIO_DEVICE,
+ .instance_size = sizeof(VhostPCINet),
+ .instance_init = vpnet_instance_init,
+ .class_init = vpnet_class_init,
+};
+
+static void virtio_register_types(void)
+{
+ type_register_static(&vpnet_info);
+}
+
+type_init(virtio_register_types)
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
index 464afa3..ab1d06b 100644
--- a/hw/virtio/vhost-pci-slave.c
+++ b/hw/virtio/vhost-pci-slave.c
@@ -39,6 +39,11 @@
VhostPCISlave *vp_slave;
+VhostPCIDev *get_vhost_pci_dev(void)
+{
+ return vp_slave->vp_dev;
+}
+
/* Clean up VhostPCIDev */
static void vp_dev_cleanup(void)
{
diff --git a/include/hw/virtio/vhost-pci-net.h b/include/hw/virtio/vhost-pci-net.h
new file mode 100644
index 0000000..e3a1c8b
--- /dev/null
+++ b/include/hw/virtio/vhost-pci-net.h
@@ -0,0 +1,34 @@
+/*
+ * Virtio Network Device
+ *
+ * Copyright Intel, Corp. 2016
+ *
+ * Authors:
+ * Wei Wang <wei.w.wang@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef _QEMU_VHOST_PCI_NET_H
+#define _QEMU_VHOST_PCI_NET_H
+
+#include "standard-headers/linux/vhost_pci_net.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/vhost-pci-slave.h"
+
+#define TYPE_VHOST_PCI_NET "vhost-pci-net-device"
+#define VHOST_PCI_NET(obj) \
+ OBJECT_CHECK(VhostPCINet, (obj), TYPE_VHOST_PCI_NET)
+
+typedef struct VhostPCINet {
+ VirtIODevice parent_obj;
+ VirtQueue *ctrlq;
+ uint16_t status;
+ uint16_t vq_pairs;
+ size_t config_size;
+ uint64_t device_features;
+} VhostPCINet;
+
+#endif
diff --git a/include/standard-headers/linux/vhost_pci_net.h b/include/standard-headers/linux/vhost_pci_net.h
new file mode 100644
index 0000000..bd8e09f
--- /dev/null
+++ b/include/standard-headers/linux/vhost_pci_net.h
@@ -0,0 +1,74 @@
+#ifndef _LINUX_VHOST_PCI_NET_H
+#define _LINUX_VHOST_PCI_NET_H
+
+/* This header is BSD licensed so anyone can use the definitions to implement
+ * compatible drivers/servers.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of Intel nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL Intel OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE. */
+
+#include "standard-headers/linux/virtio_ids.h"
+
+#define VPNET_S_LINK_UP 1 /* Link is up */
+
+struct vhost_pci_net_config {
+ /*
+ * Legal values are between 1 and 0x8000
+ */
+ uint16_t vq_pairs;
+ /* See VPNET_S_* above */
+ uint16_t status;
+} QEMU_PACKED;
+
+struct ctrlq_msg_remote_mem {
+ uint64_t gpa;
+ uint64_t size;
+};
+
+struct ctrlq_msg_remoteq {
+ uint16_t last_avail_idx;
+ int32_t vring_enable;
+ uint32_t vring_num;
+ uint64_t desc_gpa;
+ uint64_t avail_gpa;
+ uint64_t used_gpa;
+};
+
+#define VHOST_PCI_CTRLQ_MSG_REMOTE_MEM 0
+#define VHOST_PCI_CTRLQ_MSG_REMOTEQ 1
+struct vpnet_ctrlq_msg {
+ uint8_t class;
+ uint8_t cmd;
+ uint16_t size;
+ union {
+ struct ctrlq_msg_remote_mem msg_remote_mem[0];
+ struct ctrlq_msg_remoteq msg_remoteq[0];
+ } payload;
+} __attribute__((packed));
+
+static struct vpnet_ctrlq_msg vpnet_msg __attribute__ ((unused));
+#define VPNET_CTRLQ_MSG_HDR_SIZE (sizeof(vpnet_msg.class) \
+ + sizeof(vpnet_msg.cmd) \
+ + sizeof(vpnet_msg.size))
+
+#endif
--
2.7.4
* [Qemu-devel] [PATCH v2 05/16] vhost-pci-net-pci: add vhost-pci-net-pci
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
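Add vhost-pci-net-pci, the virtio-pci proxy for the vhost-pci-net device.
Its realize function registers the remote VM's memory, prepared by the
vhost-pci slave, as a 64-bit prefetchable memory BAR (BAR 2), through
which the guest can directly access the remote VM's memory.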
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/Makefile.objs | 2 +-
hw/net/vhost-pci-net.c | 6 ----
hw/virtio/virtio-pci.c | 54 +++++++++++++++++++++++++++++
hw/virtio/virtio-pci.h | 14 ++++++++
include/hw/pci/pci.h | 1 +
include/standard-headers/linux/virtio_ids.h | 1 +
6 files changed, 71 insertions(+), 7 deletions(-)
diff --git a/hw/net/Makefile.objs b/hw/net/Makefile.objs
index 6a95d92..3b218b0 100644
--- a/hw/net/Makefile.objs
+++ b/hw/net/Makefile.objs
@@ -33,7 +33,7 @@ obj-$(CONFIG_MILKYMIST) += milkymist-minimac2.o
obj-$(CONFIG_PSERIES) += spapr_llan.o
obj-$(CONFIG_XILINX_ETHLITE) += xilinx_ethlite.o
-obj-$(CONFIG_VIRTIO) += virtio-net.o
+obj-$(CONFIG_VIRTIO) += virtio-net.o vhost-pci-net.o
obj-y += vhost_net.o
obj-$(CONFIG_ETSEC) += fsl_etsec/etsec.o fsl_etsec/registers.o \
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
index 8e194ba..e36803a 100644
--- a/hw/net/vhost-pci-net.c
+++ b/hw/net/vhost-pci-net.c
@@ -117,12 +117,6 @@ static void vpnet_send_ctrlq_msg_remoteq(VhostPCINet *vpnet)
static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
{
VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
- uint16_t vq_num = vpnet->vq_pairs * 2;
- BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
- VirtioBusState *vbus = VIRTIO_BUS(qbus);
- VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
- VirtQueue *vq;
- int r, i;
/* Send the ctrlq messages to the driver when the ctrlq is ready */
if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index f9b7244..b60e683 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -2367,6 +2367,59 @@ static const TypeInfo virtio_net_pci_info = {
.class_init = virtio_net_pci_class_init,
};
+/* vhost-pci-net */
+
+static Property vpnet_pci_properties[] = {
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+#define REMOTE_MEM_BAR_ID 2
+static void vpnet_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+ VhostPCINetPCI *dev = VHOST_PCI_NET_PCI(vpci_dev);
+ DeviceState *vdev = DEVICE(&dev->vdev);
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
+
+ qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+
+ pci_register_bar(&vpci_dev->pci_dev, REMOTE_MEM_BAR_ID,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ vp_dev->bar_mr);
+ object_property_set_bool(OBJECT(vdev), true, "realized", errp);
+}
+
+static void vpnet_pci_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+ VirtioPCIClass *vpciklass = VIRTIO_PCI_CLASS(klass);
+
+ k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+ k->device_id = PCI_DEVICE_ID_VHOST_PCI_NET;
+ k->class_id = PCI_CLASS_NETWORK_ETHERNET;
+ set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+ dc->props = vpnet_pci_properties;
+ vpciklass->realize = vpnet_pci_realize;
+}
+
+static void vpnet_pci_instance_init(Object *obj)
+{
+ VhostPCINetPCI *dev = VHOST_PCI_NET_PCI(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VHOST_PCI_NET);
+}
+
+static const TypeInfo vpnet_pci_info = {
+ .name = TYPE_VHOST_PCI_NET_PCI,
+ .parent = TYPE_VIRTIO_PCI,
+ .instance_size = sizeof(VhostPCINetPCI),
+ .instance_init = vpnet_pci_instance_init,
+ .class_init = vpnet_pci_class_init,
+};
+
/* virtio-rng-pci */
static void virtio_rng_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
@@ -2596,6 +2649,7 @@ static void virtio_pci_register_types(void)
type_register_static(&virtio_keyboard_pci_info);
type_register_static(&virtio_mouse_pci_info);
type_register_static(&virtio_tablet_pci_info);
+ type_register_static(&vpnet_pci_info);
#ifdef CONFIG_LINUX
type_register_static(&virtio_host_pci_info);
#endif
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index b095dfc..6ffacd9 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -18,6 +18,7 @@
#include "hw/pci/msi.h"
#include "hw/virtio/virtio-blk.h"
#include "hw/virtio/virtio-net.h"
+#include "hw/virtio/vhost-pci-net.h"
#include "hw/virtio/virtio-rng.h"
#include "hw/virtio/virtio-serial.h"
#include "hw/virtio/virtio-scsi.h"
@@ -43,6 +44,7 @@ typedef struct VirtIOSCSIPCI VirtIOSCSIPCI;
typedef struct VirtIOBalloonPCI VirtIOBalloonPCI;
typedef struct VirtIOSerialPCI VirtIOSerialPCI;
typedef struct VirtIONetPCI VirtIONetPCI;
+typedef struct VhostPCINetPCI VhostPCINetPCI;
typedef struct VHostSCSIPCI VHostSCSIPCI;
typedef struct VirtIORngPCI VirtIORngPCI;
typedef struct VirtIOInputPCI VirtIOInputPCI;
@@ -278,6 +280,18 @@ struct VirtIONetPCI {
VirtIONet vdev;
};
+ /*
+ * vhost-pci-net-pci: This extends VirtioPCIProxy.
+ */
+#define TYPE_VHOST_PCI_NET_PCI "vhost-pci-net-pci"
+#define VHOST_PCI_NET_PCI(obj) \
+ OBJECT_CHECK(VhostPCINetPCI, (obj), TYPE_VHOST_PCI_NET_PCI)
+
+struct VhostPCINetPCI {
+ VirtIOPCIProxy parent_obj;
+ VhostPCINet vdev;
+};
+
/*
* virtio-9p-pci: This extends VirtioPCIProxy.
*/
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index a37a2d5..63903d6 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -83,6 +83,7 @@
#define PCI_DEVICE_ID_VIRTIO_RNG 0x1005
#define PCI_DEVICE_ID_VIRTIO_9P 0x1009
#define PCI_DEVICE_ID_VIRTIO_VSOCK 0x1012
+#define PCI_DEVICE_ID_VHOST_PCI_NET 0x1014
#define PCI_VENDOR_ID_REDHAT 0x1b36
#define PCI_DEVICE_ID_REDHAT_BRIDGE 0x0001
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
index 6d5c3b2..333bbd1 100644
--- a/include/standard-headers/linux/virtio_ids.h
+++ b/include/standard-headers/linux/virtio_ids.h
@@ -43,5 +43,6 @@
#define VIRTIO_ID_INPUT 18 /* virtio input */
#define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */
#define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
+#define VIRTIO_ID_VHOST_PCI_NET 21 /* vhost-pci-net */
#endif /* _LINUX_VIRTIO_IDS_H */
--
2.7.4
* [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support
From: Wei Wang @ 2017-05-12 8:35 UTC
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
This patch enables assigning an already allocated eventfd to a notifier.
QEMU creates a new eventfd for the notifier only when the notifier's fd
equals -1; otherwise, the notifier has already been assigned a valid fd.
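Concretely, the assignment path in virtio_bus_set_host_notifier() (and
its guest-notifier counterpart in virtio-pci) becomes:

    if (notifier->wfd == -1) {
        /* No fd was pre-assigned: create a fresh eventfd. */
        r = event_notifier_init(notifier, 1);
    } else {
        /* A valid fd (e.g. a peer VM's eventfd) has already been
         * assigned: keep it and just signal it. */
        r = event_notifier_set(notifier);
    }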
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost-pci-net.c | 73 +++++++++++++++++++++++++++++++++++++++
hw/virtio/virtio-bus.c | 19 +++++++---
hw/virtio/virtio-pci.c | 22 ++++++++++--
hw/virtio/virtio.c | 32 ++++++++++++++---
include/hw/virtio/vhost-pci-net.h | 6 ++++
include/hw/virtio/virtio.h | 2 ++
6 files changed, 141 insertions(+), 13 deletions(-)
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
index e36803a..0235511 100644
--- a/hw/net/vhost-pci-net.c
+++ b/hw/net/vhost-pci-net.c
@@ -18,6 +18,7 @@
#include "qemu/error-report.h"
#include "hw/virtio/virtio-access.h"
#include "hw/virtio/vhost-pci-net.h"
+#include "hw/virtio/virtio-bus.h"
#define VPNET_CTRLQ_SIZE 32
#define VPNET_VQ_SIZE 256
@@ -114,12 +115,53 @@ static void vpnet_send_ctrlq_msg_remoteq(VhostPCINet *vpnet)
g_free(msg);
}
+static inline bool vq_is_txq(uint16_t id)
+{
+ return (id % 2 == 0);
+}
+
+static inline uint16_t tx2rx(uint16_t id)
+{
+ return id + 1;
+}
+
+static inline uint16_t rx2tx(uint16_t id)
+{
+ return id - 1;
+}
+
static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
{
VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ uint16_t vq_num = vpnet->vq_pairs * 2;
+ BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+ VirtioBusState *vbus = VIRTIO_BUS(qbus);
+ VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ VirtQueue *vq;
+ int r, i;
/* Send the ctrlq messages to the driver when the ctrlq is ready */
if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
+ /*
+ * Set up the callfd when the driver is ready.
+ * Cross-share the eventfds from the remoteq:
+ * Use the tx remoteq's kickfd as the rx localq's callfd.
+ * Use the rx remoteq's kickfd as the tx localq's callfd.
+ */
+ for (i = 0; i < vq_num; i++) {
+ vq = virtio_get_queue(vdev, i);
+ if (vq_is_txq(i)) {
+ virtio_queue_set_guest_notifier(vq,
+ vpnet->remoteq_fds[tx2rx(i)].kickfd);
+ } else {
+ virtio_queue_set_guest_notifier(vq,
+ vpnet->remoteq_fds[rx2tx(i)].kickfd);
+ }
+ }
+ r = k->set_guest_notifiers(qbus->parent, vq_num, true);
+ if (r < 0) {
+ error_report("Error binding guest notifier: %d", -r);
+ }
vpnet_send_ctrlq_msg_remote_mem(vpnet);
vpnet_send_ctrlq_msg_remoteq(vpnet);
}
@@ -155,17 +197,29 @@ static void vpnet_set_config(VirtIODevice *vdev, const uint8_t *config)
{
}
+static void vpnet_copy_fds_from_vhostdev(VirtqueueFD *fds, Remoteq *remoteq)
+{
+ fds[remoteq->vring_num].callfd = remoteq->callfd;
+ fds[remoteq->vring_num].kickfd = remoteq->kickfd;
+}
+
static void vpnet_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
uint16_t i, vq_num;
VhostPCIDev *vp_dev = get_vhost_pci_dev();
+ Remoteq *remoteq;
vq_num = vp_dev->remoteq_num;
vpnet->vq_pairs = vq_num / 2;
virtio_init(vdev, "vhost-pci-net", VIRTIO_ID_VHOST_PCI_NET,
vpnet->config_size);
+ vpnet->remoteq_fds = g_malloc(sizeof(struct VirtqueueFD) *
+ vq_num);
+ QLIST_FOREACH(remoteq, &vp_dev->remoteq_list, node) {
+ vpnet_copy_fds_from_vhostdev(vpnet->remoteq_fds, remoteq);
+ }
/* Add local vqs */
for (i = 0; i < vq_num; i++) {
@@ -192,6 +246,25 @@ static void vpnet_device_unrealize(DeviceState *dev, Error **errp)
static void vpnet_reset(VirtIODevice *vdev)
{
+ VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+ VirtQueue *vq;
+ uint16_t i, vq_num = vpnet->vq_pairs * 2;
+
+ for (i = 0; i < vq_num; i++) {
+ vq = virtio_get_queue(vdev, i);
+ /*
+ * Cross-share the eventfds:
+ * Use the tx remoteq's callfd as the rx localq's kickfd.
+ * Use the rx remoteq's callfd as the tx localq's kickfd.
+ */
+ if (vq_is_txq(i)) {
+ virtio_queue_set_host_notifier(vq,
+ vpnet->remoteq_fds[tx2rx(i)].callfd);
+ } else {
+ virtio_queue_set_host_notifier(vq,
+ vpnet->remoteq_fds[rx2tx(i)].callfd);
+ }
+ }
}
static Property vpnet_properties[] = {
diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
index 3042232..3cf0991 100644
--- a/hw/virtio/virtio-bus.c
+++ b/hw/virtio/virtio-bus.c
@@ -274,11 +274,20 @@ int virtio_bus_set_host_notifier(VirtioBusState *bus, int n, bool assign)
}
if (assign) {
- r = event_notifier_init(notifier, 1);
- if (r < 0) {
- error_report("%s: unable to init event notifier: %s (%d)",
- __func__, strerror(-r), r);
- return r;
+ if (notifier->wfd == -1) {
+ r = event_notifier_init(notifier, 1);
+ if (r < 0) {
+ error_report("%s: unable to init event notifier: %s (%d)",
+ __func__, strerror(-r), r);
+ return r;
+ }
+ } else {
+ r = event_notifier_set(notifier);
+ if (r < 0) {
+ error_report("%s: unable to set event notifier: %s (%d)",
+ __func__, strerror(-r), r);
+ return r;
+ }
}
r = k->ioeventfd_assign(proxy, notifier, n, true);
if (r < 0) {
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index b60e683..3f1a198 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -963,11 +963,24 @@ static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
VirtQueue *vq = virtio_get_queue(vdev, n);
EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
+ int r = 0;
if (assign) {
- int r = event_notifier_init(notifier, 0);
- if (r < 0) {
- return r;
+ if (notifier->wfd == -1) {
+ r = event_notifier_init(notifier, 0);
+ if (r < 0) {
+ error_report("%s: unable to init event notifier: %s (%d)",
+ __func__, strerror(-r), r);
+ return r;
+
+ }
+ } else {
+ r = event_notifier_set(notifier);
+ if (r < 0) {
+ error_report("%s: unable to set event notifier: %s (%d)",
+ __func__, strerror(-r), r);
+ return r;
+ }
}
virtio_queue_set_guest_notifier_fd_handler(vq, true, with_irqfd);
} else {
@@ -2370,6 +2383,9 @@ static const TypeInfo virtio_net_pci_info = {
/* vhost-pci-net */
static Property vpnet_pci_properties[] = {
+ DEFINE_PROP_BIT("ioeventfd", VirtIOPCIProxy, flags,
+ VIRTIO_PCI_FLAG_USE_IOEVENTFD_BIT, true),
+ DEFINE_PROP_UINT32("vectors", VirtIOPCIProxy, nvectors, 4),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 03592c5..43c7273 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1196,10 +1196,6 @@ void virtio_reset(void *opaque)
vdev->device_endian = virtio_default_endian();
}
- if (k->reset) {
- k->reset(vdev);
- }
-
vdev->broken = false;
vdev->guest_features = 0;
vdev->queue_sel = 0;
@@ -1222,6 +1218,14 @@ void virtio_reset(void *opaque)
vdev->vq[i].vring.num = vdev->vq[i].vring.num_default;
vdev->vq[i].inuse = 0;
virtio_virtqueue_reset_region_cache(&vdev->vq[i]);
+ vdev->vq[i].host_notifier.rfd = -1;
+ vdev->vq[i].host_notifier.wfd = -1;
+ vdev->vq[i].guest_notifier.rfd = -1;
+ vdev->vq[i].guest_notifier.wfd = -1;
+ }
+
+ if (k->reset) {
+ k->reset(vdev);
}
}
@@ -2253,7 +2257,11 @@ void virtio_init(VirtIODevice *vdev, const char *name,
vdev->vq[i].vector = VIRTIO_NO_VECTOR;
vdev->vq[i].vdev = vdev;
vdev->vq[i].queue_index = i;
- }
+ vdev->vq[i].host_notifier.rfd = -1;
+ vdev->vq[i].host_notifier.wfd = -1;
+ vdev->vq[i].guest_notifier.rfd = -1;
+ vdev->vq[i].guest_notifier.wfd = -1;
+ }
vdev->name = name;
vdev->config_len = config_size;
@@ -2364,6 +2372,13 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
return &vq->guest_notifier;
}
+void virtio_queue_set_guest_notifier(VirtQueue *vq, int fd)
+{
+ EventNotifier *e = &vq->guest_notifier;
+ e->rfd = fd;
+ e->wfd = fd;
+}
+
static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
{
VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
@@ -2437,6 +2452,13 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
return &vq->host_notifier;
}
+void virtio_queue_set_host_notifier(VirtQueue *vq, int fd)
+{
+ EventNotifier *e = &vq->host_notifier;
+ e->rfd = fd;
+ e->wfd = fd;
+}
+
void virtio_device_set_child_bus_name(VirtIODevice *vdev, char *bus_name)
{
g_free(vdev->bus_name);
diff --git a/include/hw/virtio/vhost-pci-net.h b/include/hw/virtio/vhost-pci-net.h
index e3a1c8b..9776260 100644
--- a/include/hw/virtio/vhost-pci-net.h
+++ b/include/hw/virtio/vhost-pci-net.h
@@ -22,6 +22,11 @@
#define VHOST_PCI_NET(obj) \
OBJECT_CHECK(VhostPCINet, (obj), TYPE_VHOST_PCI_NET)
+typedef struct VirtqueueFD {
+ int kickfd;
+ int callfd;
+} VirtqueueFD;
+
typedef struct VhostPCINet {
VirtIODevice parent_obj;
VirtQueue *ctrlq;
@@ -29,6 +34,7 @@ typedef struct VhostPCINet {
uint16_t vq_pairs;
size_t config_size;
uint64_t device_features;
+ VirtqueueFD *remoteq_fds;
} VhostPCINet;
#endif
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 7b6edba..423b466 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -276,6 +276,7 @@ void virtio_queue_update_used_idx(VirtIODevice *vdev, int n);
VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n);
uint16_t virtio_get_queue_index(VirtQueue *vq);
EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
+void virtio_queue_set_guest_notifier(VirtQueue *vq, int fd);
void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
bool with_irqfd);
int virtio_device_start_ioeventfd(VirtIODevice *vdev);
@@ -284,6 +285,7 @@ int virtio_device_grab_ioeventfd(VirtIODevice *vdev);
void virtio_device_release_ioeventfd(VirtIODevice *vdev);
bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev);
EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
+void virtio_queue_set_host_notifier(VirtQueue *vq, int fd);
void virtio_queue_host_notifier_read(EventNotifier *n);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
VirtIOHandleAIOOutput handle_output);
--
2.7.4
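To make the eventfd cross-wiring in vpnet_set_status()/vpnet_reset() above
concrete, here is a sketch of the resulting fd assignments for a single
queue pair, assuming the index convention that vq_is_txq() encodes (even
local queues are tx) and one remote tx/rx pair:

    /* local tx (vq 0) is paired with the remote rx queue (remoteq 1) */
    kickfd(vq 0) = callfd(remoteq 1);  /* local tx kick -> remote rx call */
    callfd(vq 0) = kickfd(remoteq 1);  /* remote rx kick -> local tx call */

    /* local rx (vq 1) is paired with the remote tx queue (remoteq 0) */
    kickfd(vq 1) = callfd(remoteq 0);  /* local rx kick -> remote tx call */
    callfd(vq 1) = kickfd(remoteq 0);  /* remote tx kick -> local rx call */

A kick on one side is thus delivered directly as a call (interrupt) on the
other side, which is the inter-VM notification path this patch adds.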
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 07/16] vhost-user: send device id to the slave
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (5 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues " Wei Wang
` (10 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Send the virtio device ID to the slave to indicate the device type.
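For illustration, a minimal sketch of how the slave end might consume this
message (the handler name below is hypothetical; the real slave code lives
in vhost-pci-slave.c from patch 03):

    /* The 16-bit virtio device id arrives in the 64-bit u64 payload. */
    static void example_handle_set_device_id(VhostPCIDev *vp_dev,
                                             VhostUserMsg *msg)
    {
        vp_dev->dev_type = (uint16_t)msg->payload.u64; /* e.g. VIRTIO_ID_NET */
    }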
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost_net.c | 1 +
hw/virtio/vhost-user.c | 20 ++++++++++++++++++++
include/hw/virtio/vhost.h | 1 +
3 files changed, 22 insertions(+)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 22874a9..ea9879f 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -155,6 +155,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
net->dev.max_queues = 1;
net->dev.nvqs = 2;
net->dev.vqs = net->vqs;
+ net->dev.dev_type = VIRTIO_ID_NET;
if (backend_kernel) {
r = vhost_net_get_fd(options->net_backend);
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index d161884..1eba5e5 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -453,6 +453,18 @@ static int vhost_user_get_features(struct vhost_dev *dev, uint64_t *features)
return vhost_user_get_u64(dev, VHOST_USER_GET_FEATURES, features);
}
+static int vhost_user_set_dev_id(struct vhost_dev *dev, uint16_t virtio_id)
+{
+ VhostUserMsg msg = {
+ .request = VHOST_USER_SET_DEVICE_ID,
+ .flags = VHOST_USER_VERSION,
+ .payload.u64 = virtio_id,
+ .size = sizeof(msg.payload.u64),
+ };
+
+ return vhost_user_write(dev, &msg, NULL, 0);
+}
+
static int vhost_user_set_owner(struct vhost_dev *dev)
{
VhostUserMsg msg = {
@@ -510,6 +522,14 @@ static int vhost_user_init(struct vhost_dev *dev, void *opaque)
return err;
}
+ if (dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_SET_DEVICE_ID)) {
+ err = vhost_user_set_dev_id(dev, dev->dev_type);
+ if (err < 0) {
+ return err;
+ }
+ }
+
/* query the max queues we support if backend supports Multiple Queue */
if (dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
err = vhost_user_get_u64(dev, VHOST_USER_GET_QUEUE_NUM,
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index a450321..40ba87e 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -55,6 +55,7 @@ struct vhost_dev {
int n_mem_sections;
MemoryRegionSection *mem_sections;
struct vhost_virtqueue *vqs;
+ uint16_t dev_type;
int nvqs;
/* the first virtqueue which would be used by this vhost dev */
int vq_index;
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues to the slave
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (6 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 07/16] vhost-user: send device id to the slave Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP Wei Wang
` (9 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
In the vhost-pci case, the slave needs the master-side guest physical
address rather than the QEMU virtual address.
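The key branch this adds to vhost_virtqueue_start() is recapped below as a
sketch; the comments are explanatory and not part of the patch:

    if (vhost_pci_enabled(dev)) {
        /* Keep the raw master-guest physical address; the slave maps the
         * master's memory through the vhost-pci device BAR and performs
         * the translation itself. */
        vq->desc = (void *)a;
    } else {
        /* Classic vhost: translate the GPA into an address usable by the
         * local backend. */
        vq->desc = cpu_physical_memory_map(a, &l, 0);
    }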
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/vhost.c | 63 ++++++++++++++++++++++++++++++++---------------
include/hw/virtio/vhost.h | 2 ++
2 files changed, 45 insertions(+), 20 deletions(-)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 613494d..1ce7b92 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -25,6 +25,7 @@
#include "exec/address-spaces.h"
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/vhost-user.h"
#include "migration/migration.h"
#include "sysemu/dma.h"
@@ -994,6 +995,12 @@ out:
rcu_read_unlock();
}
+bool vhost_pci_enabled(struct vhost_dev *dev)
+{
+ return ((dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_VHOST_PCI)) != 0);
+}
+
static int vhost_virtqueue_start(struct vhost_dev *dev,
struct VirtIODevice *vdev,
struct vhost_virtqueue *vq,
@@ -1037,26 +1044,38 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
}
}
- vq->desc_size = s = l = virtio_queue_get_desc_size(vdev, idx);
vq->desc_phys = a = virtio_queue_get_desc_addr(vdev, idx);
- vq->desc = vhost_memory_map(dev, a, &l, 0);
- if (!vq->desc || l != s) {
- r = -ENOMEM;
- goto fail_alloc_desc;
+ if (vhost_pci_enabled(dev)) {
+ vq->desc = (void *)a;
+ } else {
+ vq->desc_size = s = l = virtio_queue_get_desc_size(vdev, idx);
+ vq->desc = cpu_physical_memory_map(a, &l, 0);
+ if (!vq->desc || l != s) {
+ r = -ENOMEM;
+ goto fail_alloc_desc;
+ }
}
vq->avail_size = s = l = virtio_queue_get_avail_size(vdev, idx);
vq->avail_phys = a = virtio_queue_get_avail_addr(vdev, idx);
- vq->avail = vhost_memory_map(dev, a, &l, 0);
- if (!vq->avail || l != s) {
- r = -ENOMEM;
- goto fail_alloc_avail;
+ if (vhost_pci_enabled(dev)) {
+ vq->avail = (void *)a;
+ } else {
+ vq->avail = cpu_physical_memory_map(a, &l, 0);
+ if (!vq->avail || l != s) {
+ r = -ENOMEM;
+ goto fail_alloc_avail;
+ }
}
vq->used_size = s = l = virtio_queue_get_used_size(vdev, idx);
vq->used_phys = a = virtio_queue_get_used_addr(vdev, idx);
- vq->used = vhost_memory_map(dev, a, &l, 1);
- if (!vq->used || l != s) {
- r = -ENOMEM;
- goto fail_alloc_used;
+ if (vhost_pci_enabled(dev)) {
+ vq->used = (void *)a;
+ } else {
+ vq->used = cpu_physical_memory_map(a, &l, 1);
+ if (!vq->used || l != s) {
+ r = -ENOMEM;
+ goto fail_alloc_used;
+ }
}
r = vhost_virtqueue_set_addr(dev, vq, vhost_vq_index, dev->log_enabled);
@@ -1139,13 +1158,17 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
!virtio_is_big_endian(vdev),
vhost_vq_index);
}
-
- vhost_memory_unmap(dev, vq->used, virtio_queue_get_used_size(vdev, idx),
- 1, virtio_queue_get_used_size(vdev, idx));
- vhost_memory_unmap(dev, vq->avail, virtio_queue_get_avail_size(vdev, idx),
- 0, virtio_queue_get_avail_size(vdev, idx));
- vhost_memory_unmap(dev, vq->desc, virtio_queue_get_desc_size(vdev, idx),
- 0, virtio_queue_get_desc_size(vdev, idx));
+ if (!vhost_pci_enabled(dev)) {
+ cpu_physical_memory_unmap(vq->used,
+ virtio_queue_get_used_size(vdev, idx),
+ 1, virtio_queue_get_used_size(vdev, idx));
+ cpu_physical_memory_unmap(vq->avail,
+ virtio_queue_get_avail_size(vdev, idx),
+ 0, virtio_queue_get_avail_size(vdev, idx));
+ cpu_physical_memory_unmap(vq->desc,
+ virtio_queue_get_desc_size(vdev, idx),
+ 0, virtio_queue_get_desc_size(vdev, idx));
+ }
}
static void vhost_eventfd_add(MemoryListener *listener,
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 40ba87e..09e02d8 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -107,4 +107,6 @@ int vhost_net_set_backend(struct vhost_dev *hdev,
struct vhost_vring_file *file);
void vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write);
+
+bool vhost_pci_enabled(struct vhost_dev *dev);
#endif
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (7 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues " Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master Wei Wang
` (8 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
The master requests the slave to create or destroy a vhost-pci device.
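A sketch of the slave-side dispatch these two commands are expected to hit;
the handler and helper names follow later patches in this series
(vp_slave_set_vhost_pci() and friends), so treat the exact shape as an
assumption:

    uint8_t cmd = (uint8_t)msg->payload.u64;

    switch (cmd) {
    case VHOST_USER_SET_VHOST_PCI_START:
        /* hot-plug a vhost-pci device of the negotiated type */
        ret = vp_slave_device_create(vp_dev->dev_type);
        break;
    case VHOST_USER_SET_VHOST_PCI_STOP:
        /* hot-unplug the vhost-pci device */
        vp_slave_device_del(vp_dev->vdev);
        break;
    }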
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost_net.c | 36 ++++++++++++++++++++++++++++++++++++
hw/virtio/vhost-user.c | 17 +++++++++++++++++
include/hw/virtio/vhost-backend.h | 2 ++
include/net/vhost_net.h | 2 ++
4 files changed, 57 insertions(+)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ea9879f..0a5278d 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -18,6 +18,7 @@
#include "net/tap.h"
#include "net/vhost-user.h"
+#include "hw/virtio/vhost-user.h"
#include "hw/virtio/virtio-net.h"
#include "net/vhost_net.h"
#include "qemu/error-report.h"
@@ -296,6 +297,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ struct vhost_net *last_net;
int r, e, i;
if (!k->set_guest_notifiers) {
@@ -341,6 +343,15 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
}
}
+ last_net = get_vhost_net(ncs[total_queues - 1].peer);
+ if (vhost_pci_enabled(&last_net->dev)) {
+ r = vhost_set_vhost_pci(ncs[total_queues - 1].peer,
+ VHOST_USER_SET_VHOST_PCI_START);
+ if (r < 0) {
+ goto err_start;
+ }
+ }
+
return 0;
err_start:
@@ -362,8 +373,15 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ struct vhost_net *last_net;
int i, r;
+ last_net = get_vhost_net(ncs[total_queues - 1].peer);
+ if (vhost_pci_enabled(&last_net->dev)) {
+ vhost_set_vhost_pci(ncs[total_queues - 1].peer,
+ VHOST_USER_SET_VHOST_PCI_STOP);
+ }
+
for (i = 0; i < total_queues; i++) {
vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
}
@@ -450,6 +468,18 @@ int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu)
return vhost_ops->vhost_net_set_mtu(&net->dev, mtu);
}
+int vhost_set_vhost_pci(NetClientState *nc, uint8_t cmd)
+{
+ VHostNetState *net = get_vhost_net(nc);
+ const VhostOps *vhost_ops = net->dev.vhost_ops;
+
+ if (vhost_ops && vhost_ops->vhost_set_vhost_pci) {
+ return vhost_ops->vhost_set_vhost_pci(&net->dev, cmd);
+ }
+
+ return 0;
+}
+
#else
uint64_t vhost_net_get_max_queues(VHostNetState *net)
{
@@ -521,4 +551,10 @@ int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu)
{
return 0;
}
+
+int vhost_set_vhost_pci(NetClientState *nc, uint8_t cmd)
+{
+ return 0;
+}
+
#endif
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 1eba5e5..ca8fe36 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -313,6 +313,22 @@ static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
return 0;
}
+static int vhost_user_set_vhost_pci(struct vhost_dev *dev, uint8_t cmd)
+{
+ VhostUserMsg msg = {
+ .request = VHOST_USER_SET_VHOST_PCI,
+ .flags = VHOST_USER_VERSION,
+ .payload.u64 = (uint64_t)cmd,
+ .size = sizeof(msg.payload.u64),
+ };
+
+ if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
+ return -1;
+ }
+
+ return 0;
+}
+
static int vhost_user_get_vring_base(struct vhost_dev *dev,
struct vhost_vring_state *ring)
{
@@ -671,6 +687,7 @@ const VhostOps user_ops = {
.vhost_reset_device = vhost_user_reset_device,
.vhost_get_vq_index = vhost_user_get_vq_index,
.vhost_set_vring_enable = vhost_user_set_vring_enable,
+ .vhost_set_vhost_pci = vhost_user_set_vhost_pci,
.vhost_requires_shm_log = vhost_user_requires_shm_log,
.vhost_migration_done = vhost_user_migration_done,
.vhost_backend_can_merge = vhost_user_can_merge,
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index c3cf4a7..1c68f67 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -70,6 +70,7 @@ typedef int (*vhost_reset_device_op)(struct vhost_dev *dev);
typedef int (*vhost_get_vq_index_op)(struct vhost_dev *dev, int idx);
typedef int (*vhost_set_vring_enable_op)(struct vhost_dev *dev,
int enable);
+typedef int (*vhost_set_vhost_pci_op)(struct vhost_dev *dev, uint8_t cmd);
typedef bool (*vhost_requires_shm_log_op)(struct vhost_dev *dev);
typedef int (*vhost_migration_done_op)(struct vhost_dev *dev,
char *mac_addr);
@@ -114,6 +115,7 @@ typedef struct VhostOps {
vhost_reset_device_op vhost_reset_device;
vhost_get_vq_index_op vhost_get_vq_index;
vhost_set_vring_enable_op vhost_set_vring_enable;
+ vhost_set_vhost_pci_op vhost_set_vhost_pci;
vhost_requires_shm_log_op vhost_requires_shm_log;
vhost_migration_done_op vhost_migration_done;
vhost_backend_can_merge_op vhost_backend_can_merge;
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index afc1499..3db5559 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -37,4 +37,6 @@ uint64_t vhost_net_get_acked_features(VHostNetState *net);
int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu);
+int vhost_set_vhost_pci(NetClientState *nc, uint8_t cmd);
+
#endif
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (8 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
` (7 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
The slave device actively sends the negotiated feature bits to
the master.
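Usage sketch: once the driver has written both 32-bit halves of the feature
word, vpnet_set_features() in vhost-pci-net.c (below) calls the new helper,
which turns into a VHOST_USER_SET_FEATURES message back to the master:

    ret = vp_slave_send_feature_bits(features);
    if (ret < 0) {
        error_report("%s: failed to send feature bits", __func__);
    }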
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost-pci-net.c | 18 ++++++++++++++++++
hw/virtio/vhost-pci-slave.c | 22 ++++++++++++++++++++++
include/hw/virtio/vhost-pci-slave.h | 2 ++
3 files changed, 42 insertions(+)
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
index 0235511..1379204 100644
--- a/hw/net/vhost-pci-net.c
+++ b/hw/net/vhost-pci-net.c
@@ -181,6 +181,24 @@ static uint64_t vpnet_get_features(VirtIODevice *vdev, uint64_t features,
static void vpnet_set_features(VirtIODevice *vdev, uint64_t features)
{
+ /*
+ * The implementation splits the write of the 64-bit "features" into two
+ * 32-bit writes, so this function is called twice. need_send is used to
+ * detect the second write, which completes "features" and triggers the
+ * send to the remote device.
+ */
+ static bool need_send;
+ int ret;
+
+ if (need_send) {
+ need_send = 0;
+ ret = vp_slave_send_feature_bits(features);
+ if (ret < 0) {
+ error_report("%s failed to send feature bits", __func__);
+ }
+ } else {
+ need_send = 1;
+ }
}
static void vpnet_get_config(VirtIODevice *vdev, uint8_t *config)
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
index ab1d06b..6cc9c21 100644
--- a/hw/virtio/vhost-pci-slave.c
+++ b/hw/virtio/vhost-pci-slave.c
@@ -122,6 +122,28 @@ static void vp_slave_set_features(VhostUserMsg *msg)
~(1 << VHOST_USER_F_PROTOCOL_FEATURES);
}
+static int vp_slave_send_u64(int request, uint64_t u64)
+{
+ VhostUserMsg msg = {
+ .request = request,
+ .flags = VHOST_USER_VERSION,
+ .payload.u64 = u64,
+ .size = sizeof(msg.payload.u64),
+ };
+
+ if (vp_slave_write(&vp_slave->chr_be, &msg) < 0) {
+ error_report("%s: failed to send", __func__);
+ return -1;
+ }
+
+ return 0;
+}
+
+int vp_slave_send_feature_bits(uint64_t features)
+{
+ return vp_slave_send_u64(VHOST_USER_SET_FEATURES, features);
+}
+
static void vp_slave_event(void *opaque, int event)
{
switch (event) {
diff --git a/include/hw/virtio/vhost-pci-slave.h b/include/hw/virtio/vhost-pci-slave.h
index b5bf02a..ab21e70 100644
--- a/include/hw/virtio/vhost-pci-slave.h
+++ b/include/hw/virtio/vhost-pci-slave.h
@@ -56,6 +56,8 @@ extern int vhost_pci_slave_init(QemuOpts *opts);
extern int vhost_pci_slave_cleanup(void);
+extern int vp_slave_send_feature_bits(uint64_t features);
+
VhostPCIDev *get_vhost_pci_dev(void);
#endif
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (9 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:51 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES Wei Wang
` (6 subsequent siblings)
17 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Enable the vhost-user master to asynchronously receive messages
from the slave. The vhost_user_asyn_read and vhost_user_can_read
stub functions are defined for platforms that do not support the
use of virtio.
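The chardev flow assumed here: the front end polls vhost_user_can_read()
for the number of bytes to deliver (one fixed-size message header), hands
exactly that many bytes to vhost_user_asyn_read(), and the handler then
pulls any variable-size payload itself with qemu_chr_fe_read_all().
Registration happens in net_vhost_user_init(), as in the net/vhost-user.c
hunk below:

    qemu_chr_fe_set_handlers(&s->chr, vhost_user_can_read,
                             vhost_user_asyn_read, net_vhost_user_event,
                             nc0->name, NULL, true);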
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/Makefile.objs | 6 +++---
hw/virtio/vhost-stub.c | 11 +++++++++++
hw/virtio/vhost-user.c | 42 +++++++++++++++++++++++++++++++++++++++++-
include/hw/virtio/vhost-user.h | 4 ++++
include/net/vhost-user.h | 4 ++++
net/vhost-user.c | 23 ++++++++++++++++++++---
6 files changed, 83 insertions(+), 7 deletions(-)
diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 5e81f2f..59e826e 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -10,7 +10,7 @@ obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
obj-y += virtio-crypto.o
obj-$(CONFIG_VIRTIO_PCI) += virtio-crypto-pci.o
-endif
-
+else
common-obj-$(call lnot,$(CONFIG_LINUX)) += vhost-stub.o
-common-obj-$(CONFIG_ALL) += vhost-stub.o
+common-obj-y += vhost-stub.o
+endif
diff --git a/hw/virtio/vhost-stub.c b/hw/virtio/vhost-stub.c
index 2d76cde..e130791 100644
--- a/hw/virtio/vhost-stub.c
+++ b/hw/virtio/vhost-stub.c
@@ -1,7 +1,18 @@
#include "qemu/osdep.h"
#include "hw/virtio/vhost.h"
+#include "hw/virtio/vhost-user.h"
bool vhost_has_free_slot(void)
{
return true;
}
+
+void vhost_user_asyn_read(void *opaque, const uint8_t *buf, int size)
+{
+ return;
+}
+
+int vhost_user_can_read(void *opaque)
+{
+ return 0;
+}
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index ca8fe36..5d55ea1 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -14,7 +14,7 @@
#include "hw/virtio/vhost-backend.h"
#include "hw/virtio/vhost-user.h"
#include "hw/virtio/virtio-net.h"
-#include "sysemu/char.h"
+#include "net/vhost-user.h"
#include "sysemu/kvm.h"
#include "qemu/error-report.h"
#include "qemu/sockets.h"
@@ -75,6 +75,46 @@ fail:
return -1;
}
+int vhost_user_can_read(void *opaque)
+{
+ return VHOST_USER_HDR_SIZE;
+}
+
+void vhost_user_asyn_read(void *opaque, const uint8_t *buf, int size)
+{
+ const char *name = opaque;
+ VhostUserMsg msg;
+ uint8_t *p = (uint8_t *) &msg;
+ CharBackend *chr_be = net_name_to_chr_be(name);
+
+ if (size != VHOST_USER_HDR_SIZE) {
+ error_report("%s: wrong message size received %d", __func__, size);
+ return;
+ }
+
+ memcpy(p, buf, VHOST_USER_HDR_SIZE);
+
+ if (msg.size) {
+ p += VHOST_USER_HDR_SIZE;
+ size = qemu_chr_fe_read_all(chr_be, p, msg.size);
+ if (size != msg.size) {
+ error_report("%s: wrong message size %d != %d", __func__,
+ size, msg.size);
+ return;
+ }
+ }
+
+ if (msg.request > VHOST_USER_MAX) {
+ error_report("%s:incorrect msg %d", __func__, msg.request);
+ }
+
+ switch (msg.request) {
+ default:
+ error_report("%s: does not support msg %d", __func__, msg.request);
+ break;
+ }
+}
+
static int process_message_reply(struct vhost_dev *dev,
VhostUserRequest request)
{
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 1fccbe2..eae5431 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -103,4 +103,8 @@ static VhostUserMsg m __attribute__ ((unused));
/* The version of the protocol we support */
#define VHOST_USER_VERSION (0x1)
+int vhost_user_can_read(void *opaque);
+
+void vhost_user_asyn_read(void *opaque, const uint8_t *buf, int size);
+
#endif
diff --git a/include/net/vhost-user.h b/include/net/vhost-user.h
index d9e328d..1bb5f1a 100644
--- a/include/net/vhost-user.h
+++ b/include/net/vhost-user.h
@@ -11,8 +11,12 @@
#ifndef NET_VHOST_USER_H
#define NET_VHOST_USER_H
+#include "sysemu/char.h"
+
struct vhost_net;
struct vhost_net *vhost_user_get_vhost_net(NetClientState *nc);
uint64_t vhost_user_get_acked_features(NetClientState *nc);
+CharBackend *net_name_to_chr_be(const char *name);
+
#endif /* VHOST_USER_H */
diff --git a/net/vhost-user.c b/net/vhost-user.c
index e7e6340..91ee146 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -12,7 +12,7 @@
#include "clients.h"
#include "net/vhost_net.h"
#include "net/vhost-user.h"
-#include "sysemu/char.h"
+#include "hw/virtio/vhost-user.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qmp-commands.h"
@@ -221,6 +221,22 @@ static void chr_closed_bh(void *opaque)
}
}
+CharBackend *net_name_to_chr_be(const char *name)
+{
+ NetClientState *ncs[MAX_QUEUE_NUM];
+ VhostUserState *s;
+ int queues;
+
+ queues = qemu_find_net_clients_except(name, ncs,
+ NET_CLIENT_DRIVER_NIC,
+ MAX_QUEUE_NUM);
+ assert(queues < MAX_QUEUE_NUM);
+
+ s = DO_UPCAST(VhostUserState, nc, ncs[0]);
+
+ return &s->chr;
+}
+
static void net_vhost_user_event(void *opaque, int event)
{
const char *name = opaque;
@@ -307,8 +323,9 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
error_report_err(err);
return -1;
}
- qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
- net_vhost_user_event, nc0->name, NULL, true);
+ qemu_chr_fe_set_handlers(&s->chr, vhost_user_can_read,
+ vhost_user_asyn_read, net_vhost_user_event,
+ nc0->name, NULL, true);
} while (!s->started);
assert(s->vhost_net);
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (10 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio" Wei Wang
` (5 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
If the feature bits sent by the slave are not equal to the ones that
were sent by the master, perform a reset of the master device.
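A sketch of the comparison done in handle_slave_acked_features(): both
sides strip VHOST_USER_F_PROTOCOL_FEATURES before comparing (1ULL is used
below for a well-defined 64-bit mask; the patch itself relies on sign
extension of the int expression):

    uint64_t master = vhost_net_get_acked_features(s->vhost_net) &
                      ~(1ULL << VHOST_USER_F_PROTOCOL_FEATURES);

    if (master != msg->payload.u64) {
        /* the slave accepted fewer features: reset and renegotiate */
        master_reset_virtio_net(vdev);
    }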
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost_net.c | 2 ++
hw/virtio/vhost-user.c | 24 ++++++++++++++++++++++++
hw/virtio/virtio-pci.c | 20 ++++++++++++++++++++
hw/virtio/virtio-pci.h | 2 ++
include/net/vhost-user.h | 14 ++++++++++++++
net/vhost-user.c | 14 +++++---------
6 files changed, 67 insertions(+), 9 deletions(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 0a5278d..7609083 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -352,6 +352,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
}
}
+ vhost_user_set_master_dev(ncs[0].peer, dev);
+
return 0;
err_start:
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 5d55ea1..1a34048 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -12,6 +12,7 @@
#include "qapi/error.h"
#include "hw/virtio/vhost.h"
#include "hw/virtio/vhost-backend.h"
+#include "hw/virtio/virtio-pci.h"
#include "hw/virtio/vhost-user.h"
#include "hw/virtio/virtio-net.h"
#include "net/vhost-user.h"
@@ -75,6 +76,26 @@ fail:
return -1;
}
+static void handle_slave_acked_features(const char *name, VhostUserMsg *msg)
+{
+ CharBackend *chr_be = net_name_to_chr_be(name);
+ VhostUserState *s = container_of(chr_be, VhostUserState, chr);
+ VirtIODevice *vdev = s->vdev;
+ uint64_t master_features, slave_features;
+
+ master_features = vhost_net_get_acked_features(s->vhost_net) &
+ ~(1 << VHOST_USER_F_PROTOCOL_FEATURES);
+ slave_features = msg->payload.u64;
+
+ /*
+ * This is a rare case: the vhost-pci driver accepted only a subset of
+ * the feature bits. In this case, reset the virtio device.
+ */
+ if (master_features != slave_features) {
+ master_reset_virtio_net(vdev);
+ }
+}
+
int vhost_user_can_read(void *opaque)
{
return VHOST_USER_HDR_SIZE;
@@ -109,6 +130,9 @@ void vhost_user_asyn_read(void *opaque, const uint8_t *buf, int size)
}
switch (msg.request) {
+ case VHOST_USER_SET_FEATURES:
+ handle_slave_acked_features(name, &msg);
+ break;
default:
error_report("%s: does not support msg %d", __func__, msg.request);
break;
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 3f1a198..0677496 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -37,6 +37,7 @@
#include "qemu/range.h"
#include "hw/virtio/virtio-bus.h"
#include "qapi/visitor.h"
+#include "monitor/qdev.h"
#define VIRTIO_PCI_REGION_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_present(dev))
@@ -2327,6 +2328,25 @@ static const TypeInfo virtio_serial_pci_info = {
/* virtio-net-pci */
+void master_reset_virtio_net(VirtIODevice *vdev)
+{
+ VirtIONet *net = VIRTIO_NET(vdev);
+ VirtIONetPCI *net_pci = container_of(net, VirtIONetPCI, vdev);
+ VirtIOPCIProxy *proxy = &net_pci->parent_obj;
+ DeviceState *qdev = DEVICE(proxy);
+ DeviceState *qdev_new;
+ Error *err = NULL;
+
+ virtio_pci_reset(qdev);
+ qdev_unplug(qdev, &err);
+ qdev->realized = false;
+ qdev_new = qdev_device_add(qdev->opts, &err);
+ if (!qdev_new) {
+ qemu_opts_del(qdev->opts);
+ }
+ object_unref(OBJECT(qdev));
+}
+
static Property virtio_net_properties[] = {
DEFINE_PROP_BIT("ioeventfd", VirtIOPCIProxy, flags,
VIRTIO_PCI_FLAG_USE_IOEVENTFD_BIT, true),
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index 6ffacd9..fa8a671 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -399,4 +399,6 @@ struct VirtIOCryptoPCI {
/* Virtio ABI version, if we increment this, we break the guest driver. */
#define VIRTIO_PCI_ABI_VERSION 0
+void master_reset_virtio_net(VirtIODevice *vdev);
+
#endif
diff --git a/include/net/vhost-user.h b/include/net/vhost-user.h
index 1bb5f1a..4cd14c9 100644
--- a/include/net/vhost-user.h
+++ b/include/net/vhost-user.h
@@ -12,6 +12,18 @@
#define NET_VHOST_USER_H
#include "sysemu/char.h"
+#include "net/vhost_net.h"
+
+typedef struct VhostUserState {
+ NetClientState nc;
+ CharBackend chr; /* only queue index 0 */
+ VHostNetState *vhost_net;
+ guint watch;
+ uint64_t acked_features;
+ bool started;
+ /* Pointer to the master device */
+ VirtIODevice *vdev;
+} VhostUserState;
struct vhost_net;
struct vhost_net *vhost_user_get_vhost_net(NetClientState *nc);
@@ -19,4 +31,6 @@ uint64_t vhost_user_get_acked_features(NetClientState *nc);
CharBackend *net_name_to_chr_be(const char *name);
+void vhost_user_set_master_dev(NetClientState *nc, VirtIODevice *vdev);
+
#endif /* VHOST_USER_H */
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 91ee146..7c7707a 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -10,7 +10,6 @@
#include "qemu/osdep.h"
#include "clients.h"
-#include "net/vhost_net.h"
#include "net/vhost-user.h"
#include "hw/virtio/vhost-user.h"
#include "qemu/config-file.h"
@@ -18,14 +17,11 @@
#include "qmp-commands.h"
#include "trace.h"
-typedef struct VhostUserState {
- NetClientState nc;
- CharBackend chr; /* only queue index 0 */
- VHostNetState *vhost_net;
- guint watch;
- uint64_t acked_features;
- bool started;
-} VhostUserState;
+void vhost_user_set_master_dev(NetClientState *nc, VirtIODevice *vdev)
+{
+ VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
+ s->vdev = vdev;
+}
VHostNetState *vhost_user_get_vhost_net(NetClientState *nc)
{
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio"
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (11 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device Wei Wang
` (4 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
After the vhost-pci-net device is hotplugged into the VM, the device
uses the feature bits that have been negotiated with the remote virtio
device to negotiate with its own driver. If the driver accepts only a
subset of those feature bits, the vhost-pci-net device can support only
a subset of the features supported by the remote virtio device. In this
case, the remote virtio device is reset and the vhost-user protocol is
restarted.
Add the "reset_virtio" field as an indicator to the slave for this case.
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost-pci-net.c | 11 +++++++++++
hw/virtio/vhost-pci-slave.c | 15 +++++++++++++--
include/hw/virtio/vhost-pci-slave.h | 1 +
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
index 1379204..a2dca50 100644
--- a/hw/net/vhost-pci-net.c
+++ b/hw/net/vhost-pci-net.c
@@ -189,8 +189,19 @@ static void vpnet_set_features(VirtIODevice *vdev, uint64_t features)
*/
static bool need_send;
int ret;
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
if (need_send) {
+ /*
+ * If the remote negotiated feature bits are not equal to the
+ * feature bits that have been negotiated between the device and
+ * driver, the remote virtio device needs a reset. Set reset_virtio
+ * to indicate this case to the slave.
+ */
+ if (vp_dev->feature_bits != features) {
+ vp_dev->feature_bits = features;
+ vp_dev->reset_virtio = 1;
+ }
need_send = 0;
ret = vp_slave_send_feature_bits(features);
if (ret < 0) {
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
index 6cc9c21..a7d3c8d 100644
--- a/hw/virtio/vhost-pci-slave.c
+++ b/hw/virtio/vhost-pci-slave.c
@@ -171,8 +171,15 @@ static void vp_slave_set_device_type(VhostUserMsg *msg)
switch (vp_dev->dev_type) {
case VIRTIO_ID_NET:
- vp_dev->feature_bits |= VHOST_PCI_FEATURE_BITS |
- VHOST_PCI_NET_FEATURE_BITS;
+ /*
+ * A set reset_virtio implies that feature_bits has already been
+ * remotely negotiated, so skip adding the locally supported features
+ * to feature_bits in this case.
+ */
+ if (!vp_dev->reset_virtio) {
+ vp_dev->feature_bits |= VHOST_PCI_FEATURE_BITS |
+ VHOST_PCI_NET_FEATURE_BITS;
+ }
break;
default:
error_report("%s: device type %d is not supported",
@@ -400,6 +407,9 @@ static int vp_slave_set_vhost_pci(VhostUserMsg *msg)
switch (cmd) {
case VHOST_USER_SET_VHOST_PCI_START:
+ if (vp_dev->reset_virtio) {
+ vp_dev->reset_virtio = 0;
+ }
ret = vp_slave_device_create(vp_dev->dev_type);
if (ret < 0) {
return ret;
@@ -585,6 +595,7 @@ static void vp_dev_init(VhostPCIDev *vp_dev)
vp_dev->vdev = NULL;
QLIST_INIT(&vp_dev->remoteq_list);
vp_dev->remoteq_num = 0;
+ vp_dev->reset_virtio = 0;
}
int vhost_pci_slave_init(QemuOpts *opts)
diff --git a/include/hw/virtio/vhost-pci-slave.h b/include/hw/virtio/vhost-pci-slave.h
index ab21e70..594917f 100644
--- a/include/hw/virtio/vhost-pci-slave.h
+++ b/include/hw/virtio/vhost-pci-slave.h
@@ -29,6 +29,7 @@ typedef struct RemoteMem {
typedef struct VhostPCIDev {
/* Pointer to the slave device */
VirtIODevice *vdev;
+ bool reset_virtio;
uint16_t dev_type;
uint64_t feature_bits;
/* Records the end (offset to the BAR) of the last mapped region */
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (12 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio" Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets Wei Wang
` (3 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/vhost-pci-slave.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
index a7d3c8d..cde122c 100644
--- a/hw/virtio/vhost-pci-slave.c
+++ b/hw/virtio/vhost-pci-slave.c
@@ -139,6 +139,42 @@ static int vp_slave_send_u64(int request, uint64_t u64)
return 0;
}
+static DeviceState *virtio_to_pci_dev(VirtIODevice *vdev, uint16_t virtio_id)
+{
+ DeviceState *qdev = NULL;
+ VhostPCINet *vpnet;
+ VhostPCINetPCI *netpci;
+
+ if (!vdev) {
+ return NULL;
+ }
+
+ switch (virtio_id) {
+ case VIRTIO_ID_NET:
+ vpnet = VHOST_PCI_NET(vdev);
+ netpci = container_of(vpnet, VhostPCINetPCI, vdev);
+ qdev = &netpci->parent_obj.pci_dev.qdev;
+ break;
+ default:
+ error_report("virtio_to_pci_dev: device type %d not supported",
+ virtio_id);
+ }
+
+ return qdev;
+}
+
+static void vp_slave_device_del(VirtIODevice *vdev)
+{
+ Error *errp = NULL;
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+ DeviceState *qdev = virtio_to_pci_dev(vdev, vp_dev->dev_type);
+
+ if (qdev != NULL) {
+ qdev_unplug(qdev, &errp);
+ vp_dev_cleanup();
+ }
+}
+
int vp_slave_send_feature_bits(uint64_t features)
{
return vp_slave_send_u64(VHOST_USER_SET_FEATURES, features);
@@ -146,10 +182,13 @@ int vp_slave_send_feature_bits(uint64_t features)
static void vp_slave_event(void *opaque, int event)
{
+ VhostPCIDev *vp_dev = vp_slave->vp_dev;
+
switch (event) {
case CHR_EVENT_OPENED:
break;
case CHR_EVENT_CLOSED:
+ vp_slave_device_del(vp_dev->vdev);
break;
}
}
@@ -416,6 +455,8 @@ static int vp_slave_set_vhost_pci(VhostUserMsg *msg)
}
break;
case VHOST_USER_SET_VHOST_PCI_STOP:
+ vp_slave_device_del(vp_dev->vdev);
+ ret = 0;
break;
default:
error_report("%s: cmd %d not supported", __func__, cmd);
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (13 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave Wei Wang
` (2 subsequent siblings)
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
If the remote device doesn't need to be reset, set bit 0 (LINK_UP)
of the device status field to allow the driver to send out packets.
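On the driver side, the expected behavior is roughly the sketch below
(hypothetical guest code; only VPNET_S_LINK_UP and the status field come
from this series' vhost_pci_net.h):

    /* Read the status field from the device config space and hold off
     * transmission until the device reports link-up. */
    if (vpnet_config->status & VPNET_S_LINK_UP) {
        /* the localq rings are live; packets may be sent */
    }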
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/net/vhost-pci-net.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/hw/net/vhost-pci-net.c b/hw/net/vhost-pci-net.c
index a2dca50..513a7ff 100644
--- a/hw/net/vhost-pci-net.c
+++ b/hw/net/vhost-pci-net.c
@@ -130,6 +130,20 @@ static inline uint16_t rx2tx(uint16_t id)
return id - 1;
}
+static void vpnet_set_link_up(VhostPCINet *vpnet)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(vpnet);
+ uint16_t old_status = vpnet->status;
+
+ /*
+ * Set the LINK_UP status bit and notify the driver that it can send
+ * packets.
+ */
+ vpnet->status |= VPNET_S_LINK_UP;
+ if (vpnet->status != old_status) {
+ virtio_notify_config(vdev);
+ }
+}
+
static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
{
VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
@@ -137,6 +151,7 @@ static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ VhostPCIDev *vp_dev = get_vhost_pci_dev();
VirtQueue *vq;
int r, i;
@@ -164,6 +179,11 @@ static void vpnet_set_status(struct VirtIODevice *vdev, uint8_t status)
}
vpnet_send_ctrlq_msg_remote_mem(vpnet);
vpnet_send_ctrlq_msg_remoteq(vpnet);
+ /* If the peer device is not reset, start the device now */
+ if (!vp_dev->reset_virtio) {
+ vdev->status = status;
+ vpnet_set_link_up(vpnet);
+ }
}
}
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (14 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets Wei Wang
@ 2017-05-12 8:35 ` Wei Wang
2017-05-12 9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
2017-05-16 6:46 ` Jason Wang
17 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:35 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
Cc: Wei Wang
Enable the use of vhost-pci. The init and cleanup stub functions are
added for platforms that do not support the use of virtio.
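An invocation sketch (the option itself is added in patch 02, which is not
shown here, so the argument format below is an assumption rather than the
documented syntax):

    qemu-system-x86_64 ... \
        -vhost-pci-slave socket,path=/opt/vhost-pci.sock   # format assumed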
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/vhost-stub.c | 11 +++++++++++
vl.c | 24 ++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/hw/virtio/vhost-stub.c b/hw/virtio/vhost-stub.c
index e130791..bfb73df 100644
--- a/hw/virtio/vhost-stub.c
+++ b/hw/virtio/vhost-stub.c
@@ -1,6 +1,7 @@
#include "qemu/osdep.h"
#include "hw/virtio/vhost.h"
#include "hw/virtio/vhost-user.h"
+#include "hw/virtio/vhost-pci-slave.h"
bool vhost_has_free_slot(void)
{
@@ -16,3 +17,13 @@ int vhost_user_can_read(void *opaque)
{
return 0;
}
+
+int vhost_pci_slave_init(QemuOpts *opt)
+{
+ return -1;
+}
+
+int vhost_pci_slave_cleanup(void)
+{
+ return -1;
+}
diff --git a/vl.c b/vl.c
index 2ee4713..18102d7 100644
--- a/vl.c
+++ b/vl.c
@@ -129,6 +129,7 @@ int main(int argc, char **argv)
#include "sysemu/replay.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/iothread.h"
+#include "hw/virtio/vhost-pci-slave.h"
#define MAX_VIRTIO_CONSOLES 1
#define MAX_SCLP_CONSOLES 1
@@ -187,6 +188,7 @@ uint8_t *boot_splash_filedata;
size_t boot_splash_filedata_size;
uint8_t qemu_extra_params_fw[2];
int only_migratable; /* turn it off unless user states otherwise */
+bool vhost_pci_slave_enabled;
int icount_align_option;
@@ -4060,6 +4062,7 @@ int main(int argc, char **argv, char **envp)
if (!opts) {
exit(1);
}
+ vhost_pci_slave_enabled = true;
break;
default:
os_parse_cmd_args(popt->index, optarg);
@@ -4591,6 +4594,18 @@ int main(int argc, char **argv, char **envp)
exit(1);
}
+ /* check if the vhost-pci-slave is enabled */
+ if (vhost_pci_slave_enabled) {
+ int ret;
+ ret = vhost_pci_slave_init(qemu_opts_find(
+ qemu_find_opts("vhost-pci-slave"),
+ NULL));
+ if (ret < 0) {
+ error_report("vhost-pci-slave init failed");
+ exit(1);
+ }
+ }
+
/* init USB devices */
if (machine_usb(current_machine)) {
if (foreach_device_config(DEV_USB, usb_parse) < 0)
@@ -4736,6 +4751,15 @@ int main(int argc, char **argv, char **envp)
pause_all_vcpus();
res_free();
+ if (vhost_pci_slave_enabled) {
+ int ret;
+ ret = vhost_pci_slave_cleanup();
+ if (ret < 0) {
+ error_report("vhost-pci-slave init failed");
+ exit(1);
+ }
+ }
+
/* vhost-user must be cleaned up before chardevs. */
net_cleanup();
audio_cleanup();
--
2.7.4
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
@ 2017-05-12 8:51 ` Wei Wang
0 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-12 8:51 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
On 05/12/2017 04:35 PM, Wei Wang wrote:
> Enable the vhost-user master to asynchronously receive messages
> from the slave. The vhost_user_asyn_read and vhost_user_can_read
> stub functions are defined for platforms that do not support the
> use of virtio.
>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
After the secondary channel based solution is merged:
https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg02467.html
, we can switch to use that.
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (15 preceding siblings ...)
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave Wei Wang
@ 2017-05-12 9:30 ` no-reply
2017-05-16 15:21 ` Michael S. Tsirkin
2017-05-16 6:46 ` Jason Wang
17 siblings, 1 reply; 52+ messages in thread
From: no-reply @ 2017-05-12 9:30 UTC (permalink / raw)
To: wei.w.wang
Cc: famz, stefanha, marcandre.lureau, mst, jasowang, pbonzini,
virtio-dev, qemu-devel
Hi,
This series failed automatic build test. Please find the testing commands and
their output below. If you have docker installed, you can probably reproduce it
locally.
Message-id: 1494578148-102868-1-git-send-email-wei.w.wang@intel.com
Subject: [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication
Type: series
=== TEST SCRIPT BEGIN ===
#!/bin/bash
set -e
git submodule update --init dtc
# Let docker tests dump environment info
export SHOW_ENV=1
export J=8
time make docker-test-quick@centos6
time make docker-test-mingw@fedora
time make docker-test-build@min-glib
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
e87cf19 vl: enable vhost-pci-slave
f371588 vhost-pci-net: tell the driver that it is ready to send packets
dc700db vhost-pci-slave: add support to delete a vhost-pci device
fcf818d vhost-pci-slave: add "reset_virtio"
faadde4 vhost-user: handling VHOST_USER_SET_FEATURES
882cf74 vhost-user: add asynchronous read for the vhost-user master
d21ccc1 vhost-pci-net: send the negotiated feature bits to the master
bdfcf9d vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP
3cc332a vhost-user: send guest physical address of virtqueues to the slave
1ada601 vhost-user: send device id to the slave
933a544 virtio: add inter-vm notification support
ec22110 vhost-pci-net-pci: add vhost-pci-net-pci
8ab7fd8 vhost-pci-net: add vhost-pci-net
bb66d67 vhost-pci-slave: create a vhost-user slave to support vhost-pci
6c08d0d vl: add the vhost-pci-slave command line option
130a927 vhost-user: share the vhost-user protocol related structures
=== OUTPUT BEGIN ===
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-9tacbi6p/src/dtc'...
Submodule path 'dtc': checked out '558cd81bdd432769b59bff01240c44f82cfb1a9d'
BUILD centos6
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-9tacbi6p/src'
ARCHIVE qemu.tgz
ARCHIVE dtc.tgz
COPY RUNNER
RUN test-quick in qemu:centos6
Packages installed:
SDL-devel-1.2.14-7.el6_7.1.x86_64
ccache-3.1.6-2.el6.x86_64
epel-release-6-8.noarch
gcc-4.4.7-17.el6.x86_64
git-1.7.1-4.el6_7.1.x86_64
glib2-devel-2.28.8-5.el6.x86_64
libfdt-devel-1.4.0-1.el6.x86_64
make-3.81-23.el6.x86_64
package g++ is not installed
pixman-devel-0.32.8-1.el6.x86_64
tar-1.23-15.el6_8.x86_64
zlib-devel-1.2.3-29.el6.x86_64
Environment variables:
PACKAGES=libfdt-devel ccache tar git make gcc g++ zlib-devel glib2-devel SDL-devel pixman-devel epel-release
HOSTNAME=bc5a9e0a34cb
TERM=xterm
MAKEFLAGS= -j8
HISTSIZE=1000
J=8
USER=root
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
MAIL=/var/spool/mail/root
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
LANG=en_US.UTF-8
TARGET_LIST=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
FEATURES= dtc
DEBUG=
G_BROKEN_FILENAMES=1
CCACHE_HASHDIR=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/var/tmp/qemu-build/install
No C++ compiler available; disabling C++ specific optional code
Install prefix /var/tmp/qemu-build/install
BIOS directory /var/tmp/qemu-build/install/share/qemu
binary directory /var/tmp/qemu-build/install/bin
library directory /var/tmp/qemu-build/install/lib
module directory /var/tmp/qemu-build/install/lib/qemu
libexec directory /var/tmp/qemu-build/install/libexec
include directory /var/tmp/qemu-build/install/include
config directory /var/tmp/qemu-build/install/etc
local state directory /var/tmp/qemu-build/install/var
Manual directory /var/tmp/qemu-build/install/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path /tmp/qemu-test/src
C compiler cc
Host C compiler cc
C++ compiler
Objective-C compiler cc
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-all
LDFLAGS -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
tcg debug enabled no
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
pixman system
SDL support yes (1.2.14)
GTK support no
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support no
GNUTLS rnd no
libgcrypt no
libgcrypt kdf no
nettle no
nettle kdf no
libtasn1 no
curses support no
virgl support no
curl support no
mingw32 support no
Audio drivers oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
VNC support yes
VNC SASL support no
VNC JPEG support no
VNC PNG support no
xen support no
brlapi support no
bluez support no
Documentation no
PIE yes
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support yes
Install blobs yes
KVM support yes
HAX support no
RDMA support no
TCG interpreter no
fdt support yes
preadv support yes
fdatasync yes
madvise yes
posix_madvise yes
libcap-ng support no
vhost-net support yes
vhost-scsi support yes
vhost-vsock support yes
Trace backends log
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info no
QGA MSI support no
seccomp support no
coroutine backend ucontext
coroutine pool yes
debug stack usage no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support no
TPM passthrough yes
QOM debugging yes
lzo support no
snappy support no
bzip2 support no
NUMA host support no
tcmalloc support no
jemalloc support no
avx2 optimization no
replication support yes
VxHS block device no
GEN x86_64-softmmu/config-devices.mak.tmp
mkdir -p dtc/libfdt
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
mkdir -p dtc/tests
GEN qemu-options.def
GEN qmp-commands.h
GEN qapi-types.h
GEN qapi-visit.h
GEN qapi-event.h
GEN x86_64-softmmu/config-devices.mak
GEN aarch64-softmmu/config-devices.mak
GEN qmp-marshal.c
GEN qapi-types.c
GEN qapi-visit.c
GEN qapi-event.c
GEN qmp-introspect.h
GEN qmp-introspect.c
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
GEN trace/generated-helpers.c
GEN module_block.h
GEN tests/test-qapi-types.h
GEN tests/test-qapi-visit.h
GEN tests/test-qmp-commands.h
GEN tests/test-qapi-event.h
GEN tests/test-qmp-introspect.h
GEN trace-root.h
GEN util/trace.h
GEN crypto/trace.h
GEN io/trace.h
GEN migration/trace.h
GEN block/trace.h
GEN backends/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/intc/trace.h
GEN hw/net/trace.h
GEN hw/virtio/trace.h
GEN hw/audio/trace.h
GEN hw/misc/trace.h
GEN hw/usb/trace.h
GEN hw/scsi/trace.h
GEN hw/nvram/trace.h
GEN hw/display/trace.h
GEN hw/input/trace.h
GEN hw/timer/trace.h
GEN hw/dma/trace.h
GEN hw/sparc/trace.h
GEN hw/sd/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/9pfs/trace.h
GEN hw/ppc/trace.h
GEN hw/pci/trace.h
GEN hw/s390x/trace.h
GEN hw/vfio/trace.h
GEN hw/acpi/trace.h
GEN hw/arm/trace.h
GEN hw/alpha/trace.h
GEN hw/xen/trace.h
GEN ui/trace.h
GEN audio/trace.h
GEN net/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/sparc/trace.h
GEN target/s390x/trace.h
GEN target/ppc/trace.h
GEN qom/trace.h
GEN linux-user/trace.h
GEN trace-root.c
GEN qapi/trace.h
GEN util/trace.c
GEN crypto/trace.c
GEN io/trace.c
GEN migration/trace.c
GEN block/trace.c
GEN backends/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/intc/trace.c
GEN hw/net/trace.c
GEN hw/virtio/trace.c
GEN hw/audio/trace.c
GEN hw/misc/trace.c
GEN hw/usb/trace.c
GEN hw/scsi/trace.c
GEN hw/nvram/trace.c
GEN hw/display/trace.c
GEN hw/input/trace.c
GEN hw/timer/trace.c
GEN hw/dma/trace.c
GEN hw/sparc/trace.c
GEN hw/sd/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/9pfs/trace.c
GEN hw/ppc/trace.c
GEN hw/pci/trace.c
GEN hw/s390x/trace.c
GEN hw/vfio/trace.c
GEN hw/acpi/trace.c
GEN hw/arm/trace.c
GEN hw/alpha/trace.c
GEN hw/xen/trace.c
GEN ui/trace.c
GEN audio/trace.c
GEN net/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/sparc/trace.c
GEN target/s390x/trace.c
GEN target/ppc/trace.c
GEN qom/trace.c
GEN linux-user/trace.c
GEN qapi/trace.c
GEN config-all-devices.mak
DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
DEP /tmp/qemu-test/src/dtc/tests/trees.S
DEP /tmp/qemu-test/src/dtc/tests/testutils.c
DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
DEP /tmp/qemu-test/src/dtc/tests/check_path.c
DEP /tmp/qemu-test/src/dtc/tests/overlay_bad_fixup.c
DEP /tmp/qemu-test/src/dtc/tests/overlay.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/property_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
DEP /tmp/qemu-test/src/dtc/tests/incbin.c
DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
DEP /tmp/qemu-test/src/dtc/tests/path-references.c
DEP /tmp/qemu-test/src/dtc/tests/references.c
DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
DEP /tmp/qemu-test/src/dtc/tests/del_property.c
DEP /tmp/qemu-test/src/dtc/tests/del_node.c
DEP /tmp/qemu-test/src/dtc/tests/setprop.c
DEP /tmp/qemu-test/src/dtc/tests/set_name.c
DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
DEP /tmp/qemu-test/src/dtc/tests/stringlist.c
DEP /tmp/qemu-test/src/dtc/tests/addr_size_cells.c
DEP /tmp/qemu-test/src/dtc/tests/notfound.c
DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
DEP /tmp/qemu-test/src/dtc/tests/get_path.c
DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/getprop.c
DEP /tmp/qemu-test/src/dtc/tests/get_name.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
DEP /tmp/qemu-test/src/dtc/tests/find_property.c
DEP /tmp/qemu-test/src/dtc/tests/root_node.c
DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_overlay.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_addresses.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
DEP /tmp/qemu-test/src/dtc/util.c
DEP /tmp/qemu-test/src/dtc/fdtput.c
DEP /tmp/qemu-test/src/dtc/fdtget.c
DEP /tmp/qemu-test/src/dtc/fdtdump.c
LEX convert-dtsv0-lexer.lex.c
make[1]: flex: Command not found
DEP /tmp/qemu-test/src/dtc/srcpos.c
BISON dtc-parser.tab.c
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
DEP /tmp/qemu-test/src/dtc/treesource.c
DEP /tmp/qemu-test/src/dtc/fstree.c
DEP /tmp/qemu-test/src/dtc/livetree.c
DEP /tmp/qemu-test/src/dtc/flattree.c
DEP /tmp/qemu-test/src/dtc/dtc.c
DEP /tmp/qemu-test/src/dtc/data.c
DEP /tmp/qemu-test/src/dtc/checks.c
CHK version_gen.h
LEX convert-dtsv0-lexer.lex.c
BISON dtc-parser.tab.c
make[1]: flex: Command not found
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
UPD version_gen.h
DEP /tmp/qemu-test/src/dtc/util.c
BISON dtc-parser.tab.c
LEX convert-dtsv0-lexer.lex.c
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
make[1]: flex: Command not found
CC libfdt/fdt.o
CC libfdt/fdt_wip.o
CC libfdt/fdt_sw.o
CC libfdt/fdt_ro.o
CC libfdt/fdt_strerror.o
CC libfdt/fdt_empty_tree.o
CC libfdt/fdt_rw.o
CC libfdt/fdt_addresses.o
CC libfdt/fdt_overlay.o
AR libfdt/libfdt.a
ar: creating libfdt/libfdt.a
a - libfdt/fdt.o
a - libfdt/fdt_ro.o
a - libfdt/fdt_wip.o
a - libfdt/fdt_sw.o
a - libfdt/fdt_rw.o
a - libfdt/fdt_strerror.o
a - libfdt/fdt_empty_tree.o
a - libfdt/fdt_addresses.o
a - libfdt/fdt_overlay.o
LEX convert-dtsv0-lexer.lex.c
BISON dtc-parser.tab.c
make[1]: flex: Command not found
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
CC tests/qemu-iotests/socket_scm_helper.o
GEN qga/qapi-generated/qga-qapi-visit.h
GEN qga/qapi-generated/qga-qmp-commands.h
GEN qga/qapi-generated/qga-qapi-types.h
GEN qga/qapi-generated/qga-qapi-visit.c
GEN qga/qapi-generated/qga-qapi-types.c
GEN qga/qapi-generated/qga-qmp-marshal.c
CC qmp-introspect.o
CC qapi-types.o
CC qapi-visit.o
CC qapi-event.o
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qint.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qbool.o
CC qobject/qfloat.o
CC qobject/qjson.o
CC qobject/json-lexer.o
CC qobject/qobject.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/lockcnt.o
CC util/bufferiszero.o
CC util/aiocb.o
CC util/async.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-posix.o
CC util/compatfd.o
CC util/event_notifier-posix.o
CC util/mmap-alloc.o
CC util/oslib-posix.o
CC util/qemu-openpty.o
CC util/qemu-thread-posix.o
CC util/memfd.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/host-utils.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/keyval.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-ucontext.o
CC util/buffer.o
CC util/timed-average.o
CC util/log.o
CC util/base64.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC util/systemd.o
CC trace-root.o
CC util/trace.o
CC crypto/trace.o
CC io/trace.o
CC migration/trace.o
CC block/trace.o
CC backends/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/intc/trace.o
CC hw/net/trace.o
CC hw/virtio/trace.o
CC hw/audio/trace.o
CC hw/misc/trace.o
CC hw/usb/trace.o
CC hw/scsi/trace.o
CC hw/nvram/trace.o
CC hw/display/trace.o
CC hw/input/trace.o
CC hw/timer/trace.o
CC hw/dma/trace.o
CC hw/sparc/trace.o
CC hw/sd/trace.o
CC hw/isa/trace.o
CC hw/mem/trace.o
CC hw/i386/xen/trace.o
CC hw/i386/trace.o
CC hw/9pfs/trace.o
CC hw/ppc/trace.o
CC hw/pci/trace.o
CC hw/s390x/trace.o
CC hw/vfio/trace.o
CC hw/acpi/trace.o
CC hw/arm/trace.o
CC hw/alpha/trace.o
CC hw/xen/trace.o
CC ui/trace.o
CC audio/trace.o
CC net/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/s390x/trace.o
CC target/sparc/trace.o
CC target/ppc/trace.o
CC qom/trace.o
CC linux-user/trace.o
CC qapi/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/clock-warp.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/cpu-get-icount.o
CC stubs/cpu-get-clock.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset.o
CC stubs/gdbstub.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/qmp_pc_dimm_device_list.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/xen-common.o
CC stubs/xen-hvm.o
CC contrib/ivshmem-client/ivshmem-client.o
CC contrib/ivshmem-client/main.o
CC contrib/ivshmem-server/ivshmem-server.o
CC qemu-nbd.o
CC contrib/ivshmem-server/main.o
CC block.o
CC blockjob.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw-format.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qed.o
CC block/qed-gencb.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/parallels.o
CC block/quorum.o
CC block/blkverify.o
CC block/blkdebug.o
CC block/blkreplay.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/file-posix.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/crypto.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC crypto/init.o
CC crypto/hash.o
CC crypto/hash-glib.o
CC crypto/hmac.o
CC crypto/hmac-glib.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlssession.o
CC crypto/tlscredsx509.o
CC crypto/secret.o
CC crypto/random-platform.o
CC crypto/pbkdf.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel.o
CC io/channel-buffer.o
CC io/channel-command.o
CC io/channel-socket.o
CC io/channel-file.o
CC io/channel-tls.o
CC io/channel-watch.o
CC io/channel-websock.o
CC io/channel-util.o
CC io/task.o
CC io/dns-resolver.o
CC qom/object.o
CC qom/container.o
CC qom/qom-qobject.o
CC qom/object_interfaces.o
GEN qemu-img-cmds.h
CC qemu-io.o
CC blockdev.o
CC qemu-bridge-helper.o
CC blockdev-nbd.o
CC iothread.o
CC qdev-monitor.o
CC device-hotplug.o
CC os-posix.o
CC page_cache.o
CC accel.o
CC bt-host.o
CC bt-vhci.o
CC dma-helpers.o
CC vl.o
CC tpm.o
CC device_tree.o
CC qmp-marshal.o
CC qmp.o
CC hmp.o
CC cpus-common.o
CC audio/audio.o
CC audio/noaudio.o
CC audio/wavaudio.o
CC audio/mixeng.o
CC audio/sdlaudio.o
CC audio/ossaudio.o
CC backends/rng.o
CC backends/rng-egd.o
CC backends/rng-random.o
CC audio/wavcapture.o
CC backends/msmouse.o
CC backends/wctablet.o
CC backends/testdev.o
CC backends/tpm.o
CC backends/hostmem.o
CC backends/hostmem-ram.o
CC backends/hostmem-file.o
CC backends/cryptodev.o
CC backends/cryptodev-builtin.o
CC block/stream.o
CC disas/arm.o
CC disas/i386.o
CC fsdev/qemu-fsdev-dummy.o
CC fsdev/qemu-fsdev-opts.o
CC hw/acpi/core.o
CC fsdev/qemu-fsdev-throttle.o
CC hw/acpi/piix4.o
CC hw/acpi/pcihp.o
CC hw/acpi/ich9.o
CC hw/acpi/tco.o
CC hw/acpi/cpu_hotplug.o
CC hw/acpi/memory_hotplug.o
CC hw/acpi/cpu.o
CC hw/acpi/nvdimm.o
CC hw/acpi/vmgenid.o
CC hw/acpi/acpi_interface.o
CC hw/acpi/bios-linker-loader.o
CC hw/acpi/ipmi.o
CC hw/acpi/aml-build.o
CC hw/acpi/acpi-stub.o
CC hw/audio/sb16.o
CC hw/acpi/ipmi-stub.o
CC hw/audio/ac97.o
CC hw/audio/es1370.o
CC hw/audio/fmopl.o
CC hw/audio/adlib.o
CC hw/audio/gus.o
CC hw/audio/gusemu_hal.o
CC hw/audio/gusemu_mixer.o
CC hw/audio/cs4231a.o
CC hw/audio/intel-hda.o
CC hw/audio/hda-codec.o
CC hw/audio/pcspk.o
CC hw/audio/wm8750.o
CC hw/audio/pl041.o
CC hw/audio/lm4549.o
CC hw/audio/marvell_88w8618.o
CC hw/block/block.o
CC hw/block/cdrom.o
CC hw/block/hd-geometry.o
CC hw/block/fdc.o
CC hw/block/m25p80.o
CC hw/block/nand.o
CC hw/block/pflash_cfi01.o
CC hw/block/pflash_cfi02.o
CC hw/block/ecc.o
CC hw/block/onenand.o
CC hw/block/nvme.o
CC hw/bt/l2cap.o
CC hw/bt/core.o
CC hw/bt/sdp.o
CC hw/bt/hci.o
CC hw/bt/hid.o
CC hw/bt/hci-csr.o
CC hw/char/ipoctal232.o
CC hw/char/parallel.o
CC hw/char/pl011.o
CC hw/char/serial.o
CC hw/char/serial-isa.o
CC hw/char/virtio-console.o
CC hw/char/serial-pci.o
CC hw/char/cadence_uart.o
CC hw/char/debugcon.o
CC hw/char/imx_serial.o
CC hw/core/qdev.o
CC hw/core/qdev-properties.o
CC hw/core/bus.o
CC hw/core/reset.o
CC hw/core/fw-path-provider.o
CC hw/core/irq.o
CC hw/core/hotplug.o
CC hw/core/ptimer.o
CC hw/core/sysbus.o
CC hw/core/machine.o
CC hw/core/loader.o
CC hw/core/qdev-properties-system.o
CC hw/core/register.o
CC hw/core/or-irq.o
CC hw/core/platform-bus.o
CC hw/display/ads7846.o
CC hw/display/cirrus_vga.o
CC hw/display/pl110.o
CC hw/display/ssd0303.o
CC hw/display/ssd0323.o
CC hw/display/vga-pci.o
CC hw/display/vga-isa.o
CC hw/display/vmware_vga.o
CC hw/display/blizzard.o
CC hw/display/exynos4210_fimd.o
CC hw/display/framebuffer.o
CC hw/display/tc6393xb.o
CC hw/dma/pl080.o
CC hw/dma/pl330.o
CC hw/dma/i8257.o
CC hw/dma/xlnx-zynq-devcfg.o
CC hw/gpio/max7310.o
CC hw/gpio/pl061.o
CC hw/gpio/zaurus.o
CC hw/gpio/gpio_key.o
CC hw/i2c/core.o
CC hw/i2c/smbus.o
CC hw/i2c/smbus_eeprom.o
CC hw/i2c/i2c-ddc.o
CC hw/i2c/versatile_i2c.o
CC hw/i2c/smbus_ich9.o
CC hw/i2c/pm_smbus.o
CC hw/i2c/bitbang_i2c.o
CC hw/i2c/exynos4210_i2c.o
CC hw/i2c/imx_i2c.o
CC hw/i2c/aspeed_i2c.o
CC hw/ide/core.o
CC hw/ide/atapi.o
CC hw/ide/qdev.o
CC hw/ide/pci.o
CC hw/ide/isa.o
CC hw/ide/piix.o
CC hw/ide/microdrive.o
CC hw/ide/ahci.o
CC hw/ide/ich.o
CC hw/input/hid.o
CC hw/input/lm832x.o
CC hw/input/pckbd.o
CC hw/input/pl050.o
CC hw/input/ps2.o
CC hw/input/stellaris_input.o
CC hw/input/tsc2005.o
CC hw/input/vmmouse.o
CC hw/input/virtio-input.o
CC hw/input/virtio-input-hid.o
CC hw/input/virtio-input-host.o
CC hw/intc/i8259_common.o
CC hw/intc/i8259.o
CC hw/intc/pl190.o
CC hw/intc/imx_avic.o
CC hw/intc/realview_gic.o
CC hw/intc/ioapic_common.o
CC hw/intc/arm_gic_common.o
CC hw/intc/arm_gic.o
CC hw/intc/arm_gicv2m.o
CC hw/intc/arm_gicv3_common.o
CC hw/intc/arm_gicv3.o
CC hw/intc/arm_gicv3_dist.o
CC hw/intc/arm_gicv3_redist.o
CC hw/intc/arm_gicv3_its_common.o
CC hw/intc/intc.o
CC hw/ipack/ipack.o
CC hw/ipack/tpci200.o
CC hw/ipmi/ipmi.o
CC hw/ipmi/ipmi_bmc_sim.o
CC hw/ipmi/ipmi_bmc_extern.o
CC hw/ipmi/isa_ipmi_kcs.o
CC hw/ipmi/isa_ipmi_bt.o
CC hw/isa/isa-bus.o
CC hw/isa/apm.o
CC hw/mem/pc-dimm.o
CC hw/mem/nvdimm.o
CC hw/misc/applesmc.o
CC hw/misc/max111x.o
CC hw/misc/tmp105.o
CC hw/misc/debugexit.o
CC hw/misc/sga.o
CC hw/misc/pc-testdev.o
CC hw/misc/pci-testdev.o
CC hw/misc/unimp.o
CC hw/misc/arm_l2x0.o
CC hw/misc/arm_integrator_debug.o
CC hw/misc/a9scu.o
CC hw/misc/arm11scu.o
CC hw/net/ne2000.o
CC hw/net/eepro100.o
CC hw/net/pcnet-pci.o
CC hw/net/pcnet.o
CC hw/net/e1000.o
CC hw/net/e1000x_common.o
CC hw/net/net_tx_pkt.o
CC hw/net/net_rx_pkt.o
CC hw/net/e1000e.o
CC hw/net/e1000e_core.o
CC hw/net/rtl8139.o
CC hw/net/vmxnet3.o
CC hw/net/smc91c111.o
CC hw/net/lan9118.o
CC hw/net/ne2000-isa.o
CC hw/net/xgmac.o
CC hw/net/allwinner_emac.o
CC hw/net/imx_fec.o
CC hw/net/cadence_gem.o
CC hw/net/stellaris_enet.o
CC hw/net/ftgmac100.o
CC hw/net/rocker/rocker.o
CC hw/net/rocker/rocker_fp.o
CC hw/net/rocker/rocker_desc.o
CC hw/net/rocker/rocker_world.o
CC hw/net/rocker/rocker_of_dpa.o
CC hw/nvram/eeprom93xx.o
CC hw/nvram/fw_cfg.o
CC hw/nvram/chrp_nvram.o
CC hw/pci-bridge/pci_bridge_dev.o
CC hw/pci-bridge/pcie_root_port.o
CC hw/pci-bridge/gen_pcie_root_port.o
CC hw/pci-bridge/pci_expander_bridge.o
CC hw/pci-bridge/xio3130_upstream.o
CC hw/pci-bridge/xio3130_downstream.o
CC hw/pci-bridge/ioh3420.o
CC hw/pci-bridge/i82801b11.o
CC hw/pci-host/pam.o
CC hw/pci-host/versatile.o
CC hw/pci-host/piix.o
CC hw/pci-host/q35.o
CC hw/pci-host/gpex.o
CC hw/pci/pci.o
CC hw/pci/pci_bridge.o
CC hw/pci/msix.o
CC hw/pci/msi.o
CC hw/pci/shpc.o
CC hw/pci/slotid_cap.o
CC hw/pci/pci_host.o
CC hw/pci/pcie_host.o
CC hw/pci/pcie.o
CC hw/pci/pcie_aer.o
CC hw/pci/pcie_port.o
CC hw/pci/pci-stub.o
CC hw/pcmcia/pcmcia.o
CC hw/scsi/scsi-disk.o
CC hw/scsi/scsi-generic.o
CC hw/scsi/scsi-bus.o
CC hw/scsi/lsi53c895a.o
CC hw/scsi/mptsas.o
CC hw/scsi/mptconfig.o
CC hw/scsi/mptendian.o
CC hw/scsi/megasas.o
CC hw/scsi/vmw_pvscsi.o
CC hw/scsi/esp.o
CC hw/scsi/esp-pci.o
CC hw/sd/pl181.o
CC hw/sd/ssi-sd.o
CC hw/sd/sd.o
CC hw/sd/core.o
CC hw/sd/sdhci.o
CC hw/smbios/smbios.o
CC hw/smbios/smbios_type_38.o
CC hw/smbios/smbios-stub.o
CC hw/smbios/smbios_type_38-stub.o
CC hw/ssi/pl022.o
CC hw/ssi/ssi.o
CC hw/ssi/xilinx_spips.o
CC hw/ssi/aspeed_smc.o
CC hw/ssi/stm32f2xx_spi.o
CC hw/timer/arm_mptimer.o
CC hw/timer/arm_timer.o
CC hw/timer/armv7m_systick.o
CC hw/timer/a9gtimer.o
CC hw/timer/cadence_ttc.o
CC hw/timer/ds1338.o
CC hw/timer/hpet.o
CC hw/timer/i8254_common.o
CC hw/timer/i8254.o
CC hw/timer/pl031.o
CC hw/timer/twl92230.o
CC hw/timer/imx_epit.o
CC hw/timer/imx_gpt.o
CC hw/timer/stm32f2xx_timer.o
CC hw/timer/aspeed_timer.o
CC hw/tpm/tpm_tis.o
CC hw/tpm/tpm_passthrough.o
CC hw/tpm/tpm_util.o
CC hw/usb/core.o
CC hw/usb/combined-packet.o
CC hw/usb/bus.o
CC hw/usb/libhw.o
CC hw/usb/desc.o
CC hw/usb/desc-msos.o
CC hw/usb/hcd-uhci.o
CC hw/usb/hcd-ohci.o
CC hw/usb/hcd-ehci.o
CC hw/usb/hcd-ehci-pci.o
CC hw/usb/hcd-ehci-sysbus.o
CC hw/usb/hcd-xhci.o
CC hw/usb/hcd-musb.o
CC hw/usb/dev-hub.o
CC hw/usb/dev-hid.o
CC hw/usb/dev-wacom.o
CC hw/usb/dev-uas.o
CC hw/usb/dev-storage.o
CC hw/usb/dev-serial.o
CC hw/usb/dev-audio.o
CC hw/usb/dev-network.o
CC hw/usb/dev-bluetooth.o
CC hw/usb/dev-smartcard-reader.o
CC hw/usb/dev-mtp.o
CC hw/usb/host-stub.o
CC hw/virtio/virtio-rng.o
CC hw/virtio/virtio-pci.o
CC hw/virtio/virtio-mmio.o
CC hw/virtio/virtio-bus.o
CC hw/virtio/vhost-pci-slave.o
CC hw/watchdog/watchdog.o
CC hw/watchdog/wdt_i6300esb.o
CC hw/watchdog/wdt_ib700.o
CC hw/watchdog/wdt_aspeed.o
CC migration/migration.o
CC migration/socket.o
CC migration/fd.o
CC migration/exec.o
CC migration/tls.o
CC migration/colo-comm.o
CC migration/colo.o
CC migration/colo-failover.o
CC migration/vmstate.o
CC migration/qemu-file.o
CC migration/qemu-file-channel.o
CC migration/xbzrle.o
CC migration/postcopy-ram.o
CC migration/qjson.o
CC migration/block.o
CC net/net.o
CC net/queue.o
CC net/util.o
CC net/checksum.o
CC net/hub.o
CC net/socket.o
CC net/dump.o
CC net/eth.o
CC net/tap.o
CC net/l2tpv3.o
CC net/vhost-user.o
CC net/tap-linux.o
CC net/slirp.o
CC net/filter.o
CC net/filter-buffer.o
CC net/colo-compare.o
CC net/colo.o
CC net/filter-mirror.o
CC net/filter-rewriter.o
CC net/filter-replay.o
CC qom/cpu.o
CC replay/replay.o
CC replay/replay-internal.o
CC replay/replay-events.o
/tmp/qemu-test/src/replay/replay-internal.c: In function ‘replay_put_array’:
/tmp/qemu-test/src/replay/replay-internal.c:65: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
CC replay/replay-time.o
CC replay/replay-input.o
CC replay/replay-char.o
CC replay/replay-snapshot.o
CC replay/replay-net.o
CC replay/replay-audio.o
CC slirp/cksum.o
CC slirp/ip_icmp.o
CC slirp/ip6_input.o
CC slirp/if.o
CC slirp/ip6_icmp.o
CC slirp/ip6_output.o
CC slirp/ip_input.o
CC slirp/ip_output.o
CC slirp/dnssearch.o
CC slirp/dhcpv6.o
CC slirp/slirp.o
CC slirp/mbuf.o
CC slirp/misc.o
CC slirp/sbuf.o
CC slirp/socket.o
CC slirp/tcp_input.o
CC slirp/tcp_output.o
CC slirp/tcp_subr.o
/tmp/qemu-test/src/slirp/tcp_input.c: In function ‘tcp_input’:
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_p’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_len’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_tos’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_id’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_off’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_ttl’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_sum’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_src.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_dst.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:220: warning: ‘save_ip6.ip_nh’ may be used uninitialized in this function
CC slirp/tcp_timer.o
CC slirp/udp6.o
CC slirp/bootp.o
CC slirp/udp.o
CC slirp/arp_table.o
CC slirp/tftp.o
CC slirp/ndp_table.o
CC slirp/ncsi.o
CC ui/keymaps.o
CC ui/console.o
CC ui/cursor.o
CC ui/input.o
CC ui/input-keymap.o
CC ui/qemu-pixman.o
CC ui/input-legacy.o
CC ui/input-linux.o
CC ui/sdl.o
CC ui/sdl_zoom.o
CC ui/x_keymap.o
CC ui/vnc-enc-hextile.o
CC ui/vnc.o
CC ui/vnc-enc-tight.o
CC ui/vnc-palette.o
CC ui/vnc-enc-zlib.o
CC ui/vnc-enc-zrle.o
CC ui/vnc-auth-vencrypt.o
CC ui/vnc-ws.o
CC ui/vnc-jobs.o
CC chardev/char.o
CC chardev/char-fd.o
CC chardev/char-file.o
CC chardev/char-io.o
CC chardev/char-mux.o
CC chardev/char-parallel.o
CC chardev/char-null.o
CC chardev/char-pipe.o
CC chardev/char-pty.o
CC chardev/char-serial.o
CC chardev/char-stdio.o
CC chardev/char-socket.o
CC chardev/char-udp.o
CC chardev/char-ringbuf.o
LINK tests/qemu-iotests/socket_scm_helper
CC qga/guest-agent-command-state.o
CC qga/commands.o
CC qga/main.o
CC qga/commands-posix.o
CC qga/channel-posix.o
CC qga/qapi-generated/qga-qapi-types.o
CC qga/qapi-generated/qga-qapi-visit.o
CC qga/qapi-generated/qga-qmp-marshal.o
AR libqemuutil.a
AR libqemustub.a
CC qemu-img.o
AS optionrom/multiboot.o
AS optionrom/linuxboot.o
CC optionrom/linuxboot_dma.o
cc: unrecognized option '-no-integrated-as'
cc: unrecognized option '-no-integrated-as'
AS optionrom/kvmvapic.o
BUILD optionrom/linuxboot.img
BUILD optionrom/linuxboot_dma.img
BUILD optionrom/multiboot.img
BUILD optionrom/linuxboot.raw
BUILD optionrom/multiboot.raw
BUILD optionrom/linuxboot_dma.raw
BUILD optionrom/kvmvapic.img
BUILD optionrom/kvmvapic.raw
SIGN optionrom/multiboot.bin
SIGN optionrom/kvmvapic.bin
SIGN optionrom/linuxboot.bin
SIGN optionrom/linuxboot_dma.bin
LINK ivshmem-client
LINK ivshmem-server
LINK qemu-nbd
LINK qemu-img
LINK qemu-io
LINK qemu-bridge-helper
GEN aarch64-softmmu/config-target.h
GEN aarch64-softmmu/hmp-commands.h
GEN aarch64-softmmu/hmp-commands-info.h
GEN x86_64-softmmu/config-target.h
GEN x86_64-softmmu/hmp-commands.h
GEN x86_64-softmmu/hmp-commands-info.h
CC aarch64-softmmu/exec.o
CC aarch64-softmmu/translate-all.o
CC aarch64-softmmu/cpu-exec.o
CC aarch64-softmmu/translate-common.o
CC aarch64-softmmu/cpu-exec-common.o
CC aarch64-softmmu/tcg/tcg.o
CC aarch64-softmmu/tcg/tcg-op.o
CC aarch64-softmmu/tcg/tcg-common.o
CC aarch64-softmmu/disas.o
CC aarch64-softmmu/fpu/softfloat.o
CC aarch64-softmmu/tcg/optimize.o
CC aarch64-softmmu/tcg-runtime.o
CC x86_64-softmmu/exec.o
CC x86_64-softmmu/translate-all.o
GEN aarch64-softmmu/gdbstub-xml.c
CC aarch64-softmmu/hax-stub.o
CC aarch64-softmmu/kvm-stub.o
CC aarch64-softmmu/arch_init.o
CC x86_64-softmmu/cpu-exec.o
CC aarch64-softmmu/cpus.o
CC aarch64-softmmu/monitor.o
CC x86_64-softmmu/translate-common.o
CC aarch64-softmmu/gdbstub.o
CC x86_64-softmmu/cpu-exec-common.o
CC aarch64-softmmu/balloon.o
CC aarch64-softmmu/ioport.o
CC x86_64-softmmu/tcg/tcg.o
CC x86_64-softmmu/tcg/tcg-op.o
CC x86_64-softmmu/tcg/optimize.o
CC aarch64-softmmu/numa.o
CC aarch64-softmmu/qtest.o
CC aarch64-softmmu/bootdevice.o
CC aarch64-softmmu/memory.o
CC aarch64-softmmu/cputlb.o
CC x86_64-softmmu/tcg/tcg-common.o
CC aarch64-softmmu/memory_mapping.o
CC aarch64-softmmu/dump.o
CC aarch64-softmmu/migration/ram.o
CC aarch64-softmmu/migration/savevm.o
CC aarch64-softmmu/hw/adc/stm32f2xx_adc.o
CC aarch64-softmmu/hw/block/virtio-blk.o
CC aarch64-softmmu/hw/block/dataplane/virtio-blk.o
CC x86_64-softmmu/fpu/softfloat.o
CC aarch64-softmmu/hw/char/exynos4210_uart.o
CC aarch64-softmmu/hw/char/omap_uart.o
CC x86_64-softmmu/disas.o
CC x86_64-softmmu/tcg-runtime.o
CC aarch64-softmmu/hw/char/digic-uart.o
GEN x86_64-softmmu/gdbstub-xml.c
CC aarch64-softmmu/hw/char/stm32f2xx_usart.o
CC aarch64-softmmu/hw/char/bcm2835_aux.o
CC aarch64-softmmu/hw/char/virtio-serial-bus.o
CC aarch64-softmmu/hw/core/nmi.o
CC x86_64-softmmu/hax-stub.o
CC x86_64-softmmu/arch_init.o
CC aarch64-softmmu/hw/core/generic-loader.o
CC aarch64-softmmu/hw/core/null-machine.o
CC aarch64-softmmu/hw/cpu/arm11mpcore.o
CC x86_64-softmmu/cpus.o
CC x86_64-softmmu/monitor.o
CC aarch64-softmmu/hw/cpu/realview_mpcore.o
CC aarch64-softmmu/hw/cpu/a9mpcore.o
CC aarch64-softmmu/hw/cpu/a15mpcore.o
CC aarch64-softmmu/hw/cpu/core.o
CC aarch64-softmmu/hw/display/omap_dss.o
CC aarch64-softmmu/hw/display/omap_lcdc.o
CC x86_64-softmmu/gdbstub.o
CC x86_64-softmmu/balloon.o
CC aarch64-softmmu/hw/display/pxa2xx_lcd.o
CC x86_64-softmmu/ioport.o
CC x86_64-softmmu/numa.o
CC x86_64-softmmu/qtest.o
CC x86_64-softmmu/bootdevice.o
CC x86_64-softmmu/kvm-all.o
CC aarch64-softmmu/hw/display/bcm2835_fb.o
CC aarch64-softmmu/hw/display/vga.o
CC aarch64-softmmu/hw/display/virtio-gpu.o
CC x86_64-softmmu/memory.o
CC x86_64-softmmu/cputlb.o
CC x86_64-softmmu/memory_mapping.o
CC aarch64-softmmu/hw/display/virtio-gpu-3d.o
LINK qemu-ga
CC x86_64-softmmu/dump.o
CC x86_64-softmmu/migration/ram.o
CC aarch64-softmmu/hw/display/virtio-gpu-pci.o
CC aarch64-softmmu/hw/display/dpcd.o
CC aarch64-softmmu/hw/display/xlnx_dp.o
CC aarch64-softmmu/hw/dma/xlnx_dpdma.o
CC aarch64-softmmu/hw/dma/omap_dma.o
CC aarch64-softmmu/hw/dma/soc_dma.o
CC x86_64-softmmu/migration/savevm.o
CC aarch64-softmmu/hw/dma/pxa2xx_dma.o
CC aarch64-softmmu/hw/dma/bcm2835_dma.o
CC x86_64-softmmu/hw/block/virtio-blk.o
CC aarch64-softmmu/hw/gpio/omap_gpio.o
CC x86_64-softmmu/hw/block/dataplane/virtio-blk.o
CC aarch64-softmmu/hw/gpio/imx_gpio.o
CC aarch64-softmmu/hw/gpio/bcm2835_gpio.o
CC aarch64-softmmu/hw/i2c/omap_i2c.o
CC aarch64-softmmu/hw/input/pxa2xx_keypad.o
CC aarch64-softmmu/hw/input/tsc210x.o
CC aarch64-softmmu/hw/intc/armv7m_nvic.o
CC x86_64-softmmu/hw/char/virtio-serial-bus.o
CC aarch64-softmmu/hw/intc/exynos4210_gic.o
CC aarch64-softmmu/hw/intc/exynos4210_combiner.o
CC aarch64-softmmu/hw/intc/omap_intc.o
CC aarch64-softmmu/hw/intc/bcm2835_ic.o
CC aarch64-softmmu/hw/intc/bcm2836_control.o
CC aarch64-softmmu/hw/intc/allwinner-a10-pic.o
CC aarch64-softmmu/hw/intc/aspeed_vic.o
CC aarch64-softmmu/hw/intc/arm_gicv3_cpuif.o
CC aarch64-softmmu/hw/misc/ivshmem.o
CC aarch64-softmmu/hw/misc/arm_sysctl.o
CC aarch64-softmmu/hw/misc/cbus.o
CC aarch64-softmmu/hw/misc/exynos4210_pmu.o
CC aarch64-softmmu/hw/misc/exynos4210_clk.o
CC aarch64-softmmu/hw/misc/imx_ccm.o
CC aarch64-softmmu/hw/misc/imx31_ccm.o
CC aarch64-softmmu/hw/misc/imx25_ccm.o
CC aarch64-softmmu/hw/misc/imx6_ccm.o
CC aarch64-softmmu/hw/misc/imx6_src.o
CC aarch64-softmmu/hw/misc/mst_fpga.o
CC x86_64-softmmu/hw/core/nmi.o
CC aarch64-softmmu/hw/misc/omap_clk.o
CC aarch64-softmmu/hw/misc/omap_gpmc.o
CC aarch64-softmmu/hw/misc/omap_l4.o
CC aarch64-softmmu/hw/misc/omap_sdrc.o
CC x86_64-softmmu/hw/core/generic-loader.o
CC aarch64-softmmu/hw/misc/omap_tap.o
CC x86_64-softmmu/hw/core/null-machine.o
CC aarch64-softmmu/hw/misc/bcm2835_mbox.o
CC x86_64-softmmu/hw/cpu/core.o
CC aarch64-softmmu/hw/misc/bcm2835_property.o
CC x86_64-softmmu/hw/display/vga.o
CC aarch64-softmmu/hw/misc/bcm2835_rng.o
CC aarch64-softmmu/hw/misc/zynq_slcr.o
CC x86_64-softmmu/hw/display/virtio-gpu.o
CC aarch64-softmmu/hw/misc/zynq-xadc.o
CC aarch64-softmmu/hw/misc/stm32f2xx_syscfg.o
CC x86_64-softmmu/hw/display/virtio-gpu-3d.o
CC aarch64-softmmu/hw/misc/edu.o
CC x86_64-softmmu/hw/display/virtio-gpu-pci.o
CC x86_64-softmmu/hw/display/virtio-vga.o
CC x86_64-softmmu/hw/intc/apic.o
CC x86_64-softmmu/hw/intc/apic_common.o
CC x86_64-softmmu/hw/intc/ioapic.o
CC x86_64-softmmu/hw/isa/lpc_ich9.o
CC x86_64-softmmu/hw/misc/vmport.o
CC aarch64-softmmu/hw/misc/auxbus.o
CC x86_64-softmmu/hw/misc/ivshmem.o
CC aarch64-softmmu/hw/misc/aspeed_scu.o
CC x86_64-softmmu/hw/misc/pvpanic.o
CC aarch64-softmmu/hw/misc/aspeed_sdmc.o
CC x86_64-softmmu/hw/misc/edu.o
CC x86_64-softmmu/hw/misc/hyperv_testdev.o
CC x86_64-softmmu/hw/net/virtio-net.o
CC x86_64-softmmu/hw/net/vhost-pci-net.o
CC x86_64-softmmu/hw/net/vhost_net.o
CC x86_64-softmmu/hw/scsi/virtio-scsi.o
CC x86_64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC x86_64-softmmu/hw/scsi/vhost-scsi-common.o
CC x86_64-softmmu/hw/scsi/vhost-scsi.o
CC aarch64-softmmu/hw/net/virtio-net.o
CC aarch64-softmmu/hw/net/vhost-pci-net.o
CC aarch64-softmmu/hw/net/vhost_net.o
CC x86_64-softmmu/hw/timer/mc146818rtc.o
CC x86_64-softmmu/hw/vfio/common.o
CC aarch64-softmmu/hw/pcmcia/pxa2xx.o
CC aarch64-softmmu/hw/scsi/virtio-scsi.o
CC aarch64-softmmu/hw/scsi/virtio-scsi-dataplane.o
CC x86_64-softmmu/hw/vfio/pci.o
CC aarch64-softmmu/hw/scsi/vhost-scsi-common.o
CC x86_64-softmmu/hw/vfio/pci-quirks.o
CC aarch64-softmmu/hw/scsi/vhost-scsi.o
CC aarch64-softmmu/hw/sd/omap_mmc.o
CC aarch64-softmmu/hw/sd/pxa2xx_mmci.o
CC aarch64-softmmu/hw/sd/bcm2835_sdhost.o
CC aarch64-softmmu/hw/ssi/omap_spi.o
CC x86_64-softmmu/hw/vfio/platform.o
CC aarch64-softmmu/hw/ssi/imx_spi.o
CC aarch64-softmmu/hw/timer/exynos4210_mct.o
CC x86_64-softmmu/hw/vfio/spapr.o
CC x86_64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/timer/exynos4210_pwm.o
CC aarch64-softmmu/hw/timer/exynos4210_rtc.o
CC x86_64-softmmu/hw/virtio/virtio-balloon.o
CC aarch64-softmmu/hw/timer/omap_gptimer.o
CC aarch64-softmmu/hw/timer/omap_synctimer.o
CC aarch64-softmmu/hw/timer/pxa2xx_timer.o
CC aarch64-softmmu/hw/timer/digic-timer.o
CC aarch64-softmmu/hw/timer/allwinner-a10-pit.o
CC x86_64-softmmu/hw/virtio/vhost.o
CC x86_64-softmmu/hw/virtio/vhost-backend.o
CC x86_64-softmmu/hw/virtio/vhost-user.o
CC x86_64-softmmu/hw/virtio/vhost-vsock.o
CC x86_64-softmmu/hw/virtio/virtio-crypto.o
CC aarch64-softmmu/hw/usb/tusb6010.o
CC aarch64-softmmu/hw/vfio/common.o
CC x86_64-softmmu/hw/virtio/virtio-crypto-pci.o
CC aarch64-softmmu/hw/vfio/pci.o
CC x86_64-softmmu/hw/i386/multiboot.o
CC x86_64-softmmu/hw/i386/pc.o
CC x86_64-softmmu/hw/i386/pc_piix.o
CC x86_64-softmmu/hw/i386/pc_q35.o
CC x86_64-softmmu/hw/i386/pc_sysfw.o
/tmp/qemu-test/src/hw/i386/pc_piix.c: In function ‘igd_passthrough_isa_bridge_create’:
/tmp/qemu-test/src/hw/i386/pc_piix.c:1055: warning: ‘pch_rev_id’ may be used uninitialized in this function
CC x86_64-softmmu/hw/i386/x86-iommu.o
CC x86_64-softmmu/hw/i386/intel_iommu.o
CC x86_64-softmmu/hw/i386/amd_iommu.o
CC x86_64-softmmu/hw/i386/kvmvapic.o
CC x86_64-softmmu/hw/i386/acpi-build.o
CC aarch64-softmmu/hw/vfio/pci-quirks.o
CC x86_64-softmmu/hw/i386/pci-assign-load-rom.o
CC x86_64-softmmu/hw/i386/kvm/clock.o
CC aarch64-softmmu/hw/vfio/platform.o
CC aarch64-softmmu/hw/vfio/calxeda-xgmac.o
CC aarch64-softmmu/hw/vfio/amd-xgbe.o
CC x86_64-softmmu/hw/i386/kvm/apic.o
CC aarch64-softmmu/hw/vfio/spapr.o
CC aarch64-softmmu/hw/virtio/virtio.o
CC aarch64-softmmu/hw/virtio/virtio-balloon.o
CC aarch64-softmmu/hw/virtio/vhost.o
CC aarch64-softmmu/hw/virtio/vhost-backend.o
CC aarch64-softmmu/hw/virtio/vhost-user.o
CC x86_64-softmmu/hw/i386/kvm/i8259.o
/tmp/qemu-test/src/hw/i386/acpi-build.c: In function ‘build_append_pci_bus_devices’:
/tmp/qemu-test/src/hw/i386/acpi-build.c:525: warning: ‘notify_method’ may be used uninitialized in this function
CC aarch64-softmmu/hw/virtio/vhost-vsock.o
CC aarch64-softmmu/hw/virtio/virtio-crypto.o
CC x86_64-softmmu/hw/i386/kvm/ioapic.o
CC x86_64-softmmu/hw/i386/kvm/i8254.o
CC aarch64-softmmu/hw/virtio/virtio-crypto-pci.o
CC x86_64-softmmu/hw/i386/kvm/pci-assign.o
CC x86_64-softmmu/target/i386/translate.o
CC x86_64-softmmu/target/i386/helper.o
CC x86_64-softmmu/target/i386/cpu.o
CC x86_64-softmmu/target/i386/bpt_helper.o
CC x86_64-softmmu/target/i386/excp_helper.o
CC aarch64-softmmu/hw/arm/boot.o
CC aarch64-softmmu/hw/arm/collie.o
CC x86_64-softmmu/target/i386/fpu_helper.o
CC aarch64-softmmu/hw/arm/exynos4_boards.o
CC x86_64-softmmu/target/i386/cc_helper.o
CC x86_64-softmmu/target/i386/int_helper.o
CC x86_64-softmmu/target/i386/svm_helper.o
CC x86_64-softmmu/target/i386/smm_helper.o
CC x86_64-softmmu/target/i386/misc_helper.o
CC aarch64-softmmu/hw/arm/gumstix.o
CC x86_64-softmmu/target/i386/mem_helper.o
CC x86_64-softmmu/target/i386/mpx_helper.o
CC x86_64-softmmu/target/i386/seg_helper.o
CC x86_64-softmmu/target/i386/gdbstub.o
CC aarch64-softmmu/hw/arm/highbank.o
CC aarch64-softmmu/hw/arm/digic_boards.o
CC x86_64-softmmu/target/i386/machine.o
CC x86_64-softmmu/target/i386/arch_memory_mapping.o
CC x86_64-softmmu/target/i386/arch_dump.o
CC x86_64-softmmu/target/i386/monitor.o
CC x86_64-softmmu/target/i386/kvm.o
CC aarch64-softmmu/hw/arm/integratorcp.o
CC aarch64-softmmu/hw/arm/mainstone.o
CC aarch64-softmmu/hw/arm/musicpal.o
CC aarch64-softmmu/hw/arm/nseries.o
CC aarch64-softmmu/hw/arm/omap_sx1.o
CC x86_64-softmmu/target/i386/hyperv.o
CC aarch64-softmmu/hw/arm/palm.o
CC aarch64-softmmu/hw/arm/realview.o
GEN trace/generated-helpers.c
CC x86_64-softmmu/trace/control-target.o
CC aarch64-softmmu/hw/arm/spitz.o
CC x86_64-softmmu/gdbstub-xml.o
CC aarch64-softmmu/hw/arm/stellaris.o
CC aarch64-softmmu/hw/arm/tosa.o
CC aarch64-softmmu/hw/arm/versatilepb.o
CC aarch64-softmmu/hw/arm/vexpress.o
CC aarch64-softmmu/hw/arm/virt.o
CC aarch64-softmmu/hw/arm/xilinx_zynq.o
CC aarch64-softmmu/hw/arm/z2.o
CC aarch64-softmmu/hw/arm/virt-acpi-build.o
CC aarch64-softmmu/hw/arm/netduino2.o
CC x86_64-softmmu/trace/generated-helpers.o
CC aarch64-softmmu/hw/arm/sysbus-fdt.o
CC aarch64-softmmu/hw/arm/armv7m.o
CC aarch64-softmmu/hw/arm/exynos4210.o
CC aarch64-softmmu/hw/arm/pxa2xx.o
CC aarch64-softmmu/hw/arm/pxa2xx_gpio.o
CC aarch64-softmmu/hw/arm/pxa2xx_pic.o
CC aarch64-softmmu/hw/arm/digic.o
CC aarch64-softmmu/hw/arm/omap1.o
CC aarch64-softmmu/hw/arm/omap2.o
CC aarch64-softmmu/hw/arm/strongarm.o
CC aarch64-softmmu/hw/arm/allwinner-a10.o
CC aarch64-softmmu/hw/arm/cubieboard.o
CC aarch64-softmmu/hw/arm/bcm2835_peripherals.o
CC aarch64-softmmu/hw/arm/bcm2836.o
CC aarch64-softmmu/hw/arm/raspi.o
CC aarch64-softmmu/hw/arm/stm32f205_soc.o
CC aarch64-softmmu/hw/arm/xlnx-zynqmp.o
CC aarch64-softmmu/hw/arm/xlnx-ep108.o
CC aarch64-softmmu/hw/arm/fsl-imx25.o
CC aarch64-softmmu/hw/arm/imx25_pdk.o
CC aarch64-softmmu/hw/arm/fsl-imx31.o
CC aarch64-softmmu/hw/arm/kzm.o
CC aarch64-softmmu/hw/arm/fsl-imx6.o
CC aarch64-softmmu/hw/arm/sabrelite.o
CC aarch64-softmmu/hw/arm/aspeed_soc.o
CC aarch64-softmmu/hw/arm/aspeed.o
CC aarch64-softmmu/target/arm/arm-semi.o
CC aarch64-softmmu/target/arm/machine.o
CC aarch64-softmmu/target/arm/psci.o
CC aarch64-softmmu/target/arm/arch_dump.o
CC aarch64-softmmu/target/arm/monitor.o
CC aarch64-softmmu/target/arm/kvm-stub.o
CC aarch64-softmmu/target/arm/translate.o
CC aarch64-softmmu/target/arm/op_helper.o
CC aarch64-softmmu/target/arm/helper.o
CC aarch64-softmmu/target/arm/cpu.o
CC aarch64-softmmu/target/arm/neon_helper.o
CC aarch64-softmmu/target/arm/iwmmxt_helper.o
CC aarch64-softmmu/target/arm/gdbstub.o
CC aarch64-softmmu/target/arm/cpu64.o
CC aarch64-softmmu/target/arm/translate-a64.o
CC aarch64-softmmu/target/arm/helper-a64.o
CC aarch64-softmmu/target/arm/crypto_helper.o
CC aarch64-softmmu/target/arm/gdbstub64.o
CC aarch64-softmmu/target/arm/arm-powerctl.o
GEN trace/generated-helpers.c
CC aarch64-softmmu/trace/control-target.o
CC aarch64-softmmu/gdbstub-xml.o
CC aarch64-softmmu/trace/generated-helpers.o
/tmp/qemu-test/src/target/arm/translate-a64.c: In function ‘handle_shri_with_rndacc’:
/tmp/qemu-test/src/target/arm/translate-a64.c:6359: warning: ‘tcg_src_hi’ may be used uninitialized in this function
/tmp/qemu-test/src/target/arm/translate-a64.c: In function ‘disas_simd_scalar_two_reg_misc’:
/tmp/qemu-test/src/target/arm/translate-a64.c:8086: warning: ‘rmode’ may be used uninitialized in this function
LINK aarch64-softmmu/qemu-system-aarch64
LINK x86_64-softmmu/qemu-system-x86_64
BISON dtc-parser.tab.c
make[1]: bison: Command not found
LEX convert-dtsv0-lexer.lex.c
make[1]: flex: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
TEST tests/qapi-schema/alternate-any.out
TEST tests/qapi-schema/alternate-array.out
TEST tests/qapi-schema/alternate-conflict-dict.out
TEST tests/qapi-schema/alternate-clash.out
TEST tests/qapi-schema/alternate-conflict-string.out
TEST tests/qapi-schema/alternate-empty.out
TEST tests/qapi-schema/alternate-base.out
TEST tests/qapi-schema/alternate-nested.out
TEST tests/qapi-schema/alternate-unknown.out
TEST tests/qapi-schema/args-alternate.out
TEST tests/qapi-schema/args-any.out
TEST tests/qapi-schema/args-array-empty.out
TEST tests/qapi-schema/args-array-unknown.out
TEST tests/qapi-schema/args-bad-boxed.out
TEST tests/qapi-schema/args-boxed-anon.out
TEST tests/qapi-schema/args-boxed-empty.out
TEST tests/qapi-schema/args-boxed-string.out
TEST tests/qapi-schema/args-int.out
TEST tests/qapi-schema/args-invalid.out
TEST tests/qapi-schema/args-member-array-bad.out
TEST tests/qapi-schema/args-member-case.out
TEST tests/qapi-schema/args-member-unknown.out
TEST tests/qapi-schema/args-name-clash.out
TEST tests/qapi-schema/args-union.out
TEST tests/qapi-schema/args-unknown.out
TEST tests/qapi-schema/bad-base.out
TEST tests/qapi-schema/bad-data.out
TEST tests/qapi-schema/bad-ident.out
TEST tests/qapi-schema/bad-type-bool.out
TEST tests/qapi-schema/bad-type-dict.out
TEST tests/qapi-schema/bad-type-int.out
TEST tests/qapi-schema/base-cycle-direct.out
TEST tests/qapi-schema/base-cycle-indirect.out
TEST tests/qapi-schema/command-int.out
TEST tests/qapi-schema/comments.out
TEST tests/qapi-schema/doc-bad-alternate-member.out
TEST tests/qapi-schema/doc-bad-command-arg.out
TEST tests/qapi-schema/doc-bad-symbol.out
TEST tests/qapi-schema/doc-bad-union-member.out
TEST tests/qapi-schema/doc-before-include.out
TEST tests/qapi-schema/doc-before-pragma.out
TEST tests/qapi-schema/doc-duplicated-arg.out
TEST tests/qapi-schema/doc-duplicated-return.out
TEST tests/qapi-schema/doc-duplicated-since.out
TEST tests/qapi-schema/doc-empty-arg.out
TEST tests/qapi-schema/doc-empty-section.out
TEST tests/qapi-schema/doc-empty-symbol.out
TEST tests/qapi-schema/doc-good.out
TEST tests/qapi-schema/doc-interleaved-section.out
TEST tests/qapi-schema/doc-invalid-end.out
TEST tests/qapi-schema/doc-invalid-end2.out
TEST tests/qapi-schema/doc-invalid-return.out
TEST tests/qapi-schema/doc-invalid-section.out
TEST tests/qapi-schema/doc-invalid-start.out
TEST tests/qapi-schema/doc-missing.out
TEST tests/qapi-schema/doc-missing-colon.out
TEST tests/qapi-schema/doc-missing-expr.out
TEST tests/qapi-schema/doc-missing-space.out
TEST tests/qapi-schema/doc-no-symbol.out
TEST tests/qapi-schema/double-type.out
TEST tests/qapi-schema/double-data.out
TEST tests/qapi-schema/duplicate-key.out
TEST tests/qapi-schema/empty.out
TEST tests/qapi-schema/enum-bad-name.out
TEST tests/qapi-schema/enum-bad-prefix.out
TEST tests/qapi-schema/enum-clash-member.out
TEST tests/qapi-schema/enum-dict-member.out
TEST tests/qapi-schema/enum-int-member.out
TEST tests/qapi-schema/enum-member-case.out
TEST tests/qapi-schema/enum-missing-data.out
TEST tests/qapi-schema/enum-wrong-data.out
TEST tests/qapi-schema/escape-outside-string.out
TEST tests/qapi-schema/escape-too-big.out
TEST tests/qapi-schema/escape-too-short.out
TEST tests/qapi-schema/event-boxed-empty.out
TEST tests/qapi-schema/event-case.out
TEST tests/qapi-schema/event-nest-struct.out
TEST tests/qapi-schema/flat-union-array-branch.out
TEST tests/qapi-schema/flat-union-bad-base.out
TEST tests/qapi-schema/flat-union-bad-discriminator.out
TEST tests/qapi-schema/flat-union-base-any.out
TEST tests/qapi-schema/flat-union-base-union.out
TEST tests/qapi-schema/flat-union-clash-member.out
TEST tests/qapi-schema/flat-union-empty.out
TEST tests/qapi-schema/flat-union-incomplete-branch.out
TEST tests/qapi-schema/flat-union-inline.out
TEST tests/qapi-schema/flat-union-int-branch.out
TEST tests/qapi-schema/flat-union-invalid-branch-key.out
TEST tests/qapi-schema/flat-union-invalid-discriminator.out
TEST tests/qapi-schema/flat-union-no-base.out
TEST tests/qapi-schema/flat-union-optional-discriminator.out
TEST tests/qapi-schema/funny-char.out
TEST tests/qapi-schema/flat-union-string-discriminator.out
TEST tests/qapi-schema/ident-with-escape.out
TEST tests/qapi-schema/include-before-err.out
TEST tests/qapi-schema/include-cycle.out
TEST tests/qapi-schema/include-extra-junk.out
TEST tests/qapi-schema/include-format-err.out
TEST tests/qapi-schema/include-nested-err.out
TEST tests/qapi-schema/include-no-file.out
TEST tests/qapi-schema/include-non-file.out
TEST tests/qapi-schema/include-relpath.out
TEST tests/qapi-schema/include-repetition.out
TEST tests/qapi-schema/include-self-cycle.out
TEST tests/qapi-schema/indented-expr.out
TEST tests/qapi-schema/include-simple.out
TEST tests/qapi-schema/leading-comma-list.out
TEST tests/qapi-schema/leading-comma-object.out
TEST tests/qapi-schema/missing-colon.out
TEST tests/qapi-schema/missing-comma-list.out
TEST tests/qapi-schema/missing-comma-object.out
TEST tests/qapi-schema/missing-type.out
TEST tests/qapi-schema/nested-struct-data.out
TEST tests/qapi-schema/non-objects.out
TEST tests/qapi-schema/pragma-doc-required-crap.out
TEST tests/qapi-schema/pragma-extra-junk.out
TEST tests/qapi-schema/pragma-name-case-whitelist-crap.out
TEST tests/qapi-schema/pragma-non-dict.out
TEST tests/qapi-schema/qapi-schema-test.out
TEST tests/qapi-schema/pragma-returns-whitelist-crap.out
TEST tests/qapi-schema/quoted-structural-chars.out
TEST tests/qapi-schema/redefined-builtin.out
TEST tests/qapi-schema/redefined-command.out
TEST tests/qapi-schema/redefined-event.out
TEST tests/qapi-schema/redefined-type.out
TEST tests/qapi-schema/reserved-command-q.out
TEST tests/qapi-schema/reserved-enum-q.out
TEST tests/qapi-schema/reserved-member-has.out
TEST tests/qapi-schema/reserved-member-q.out
TEST tests/qapi-schema/reserved-member-u.out
TEST tests/qapi-schema/reserved-member-underscore.out
TEST tests/qapi-schema/reserved-type-kind.out
TEST tests/qapi-schema/reserved-type-list.out
TEST tests/qapi-schema/returns-alternate.out
TEST tests/qapi-schema/returns-array-bad.out
TEST tests/qapi-schema/returns-dict.out
TEST tests/qapi-schema/returns-unknown.out
TEST tests/qapi-schema/returns-whitelist.out
TEST tests/qapi-schema/struct-base-clash-deep.out
TEST tests/qapi-schema/struct-base-clash.out
TEST tests/qapi-schema/struct-data-invalid.out
TEST tests/qapi-schema/trailing-comma-list.out
TEST tests/qapi-schema/struct-member-invalid.out
TEST tests/qapi-schema/trailing-comma-object.out
TEST tests/qapi-schema/type-bypass-bad-gen.out
TEST tests/qapi-schema/unclosed-list.out
TEST tests/qapi-schema/unclosed-object.out
TEST tests/qapi-schema/unclosed-string.out
TEST tests/qapi-schema/unicode-str.out
TEST tests/qapi-schema/union-base-empty.out
TEST tests/qapi-schema/union-base-no-discriminator.out
TEST tests/qapi-schema/union-branch-case.out
TEST tests/qapi-schema/union-clash-branches.out
TEST tests/qapi-schema/union-empty.out
TEST tests/qapi-schema/union-invalid-base.out
TEST tests/qapi-schema/union-optional-branch.out
TEST tests/qapi-schema/union-unknown.out
TEST tests/qapi-schema/unknown-escape.out
TEST tests/qapi-schema/unknown-expr-key.out
GEN tests/qapi-schema/doc-good.test.texi
CC tests/check-qdict.o
CC tests/test-char.o
CC tests/check-qfloat.o
CC tests/check-qint.o
CC tests/check-qstring.o
CC tests/check-qlist.o
CC tests/check-qnull.o
CC tests/check-qjson.o
CC tests/test-qobject-output-visitor.o
GEN tests/test-qapi-types.c
GEN tests/test-qapi-visit.c
GEN tests/test-qapi-event.c
GEN tests/test-qmp-introspect.c
CC tests/test-clone-visitor.o
CC tests/test-qobject-input-visitor.o
CC tests/test-qmp-commands.o
GEN tests/test-qmp-marshal.c
CC tests/test-string-input-visitor.o
CC tests/test-string-output-visitor.o
CC tests/test-qmp-event.o
CC tests/test-opts-visitor.o
CC tests/test-coroutine.o
CC tests/iothread.o
CC tests/test-visitor-serialization.o
CC tests/test-iov.o
CC tests/test-aio.o
CC tests/test-aio-multithread.o
CC tests/test-throttle.o
CC tests/test-thread-pool.o
CC tests/test-hbitmap.o
CC tests/test-blockjob.o
CC tests/test-blockjob-txn.o
CC tests/test-x86-cpuid.o
CC tests/test-xbzrle.o
CC tests/test-vmstate.o
CC tests/test-cutils.o
CC tests/test-shift128.o
CC tests/test-int128.o
CC tests/test-mul64.o
CC tests/rcutorture.o
CC tests/test-rcu-list.o
CC tests/test-qdist.o
/tmp/qemu-test/src/tests/test-int128.c:180: warning: ‘__noclone__’ attribute directive ignored
CC tests/test-qht.o
CC tests/test-qht-par.o
CC tests/qht-bench.o
CC tests/test-bitcnt.o
CC tests/check-qom-interface.o
CC tests/test-bitops.o
CC tests/check-qom-proplist.o
CC tests/test-qemu-opts.o
CC tests/test-keyval.o
CC tests/test-write-threshold.o
CC tests/test-crypto-hmac.o
CC tests/test-crypto-hash.o
CC tests/test-crypto-secret.o
CC tests/test-crypto-cipher.o
CC tests/test-qga.o
CC tests/libqtest.o
CC tests/test-timed-average.o
CC tests/test-io-task.o
CC tests/io-channel-helpers.o
CC tests/test-io-channel-socket.o
CC tests/test-io-channel-file.o
CC tests/test-io-channel-command.o
CC tests/test-io-channel-buffer.o
CC tests/test-base64.o
CC tests/test-crypto-ivgen.o
CC tests/test-crypto-block.o
CC tests/test-crypto-afsplit.o
CC tests/test-crypto-xts.o
CC tests/test-uuid.o
CC tests/test-bufferiszero.o
CC tests/ptimer-test.o
CC tests/test-logging.o
CC tests/test-replication.o
CC tests/ptimer-test-stubs.o
CC tests/vhost-user-test.o
CC tests/test-qapi-util.o
CC tests/libqos/fw_cfg.o
CC tests/libqos/pci.o
CC tests/libqos/malloc.o
CC tests/libqos/libqos.o
CC tests/libqos/i2c.o
CC tests/libqos/malloc-spapr.o
CC tests/libqos/libqos-spapr.o
CC tests/libqos/rtas.o
CC tests/libqos/pci-spapr.o
CC tests/libqos/pci-pc.o
CC tests/libqos/malloc-pc.o
CC tests/libqos/libqos-pc.o
CC tests/libqos/ahci.o
CC tests/libqos/virtio.o
CC tests/libqos/virtio-pci.o
CC tests/libqos/virtio-mmio.o
CC tests/libqos/malloc-generic.o
CC tests/endianness-test.o
CC tests/fdc-test.o
CC tests/ide-test.o
CC tests/ahci-test.o
CC tests/hd-geo-test.o
CC tests/boot-order-test.o
CC tests/bios-tables-test.o
CC tests/boot-sector.o
CC tests/acpi-utils.o
CC tests/rtc-test.o
CC tests/pxe-test.o
CC tests/boot-serial-test.o
CC tests/ipmi-kcs-test.o
CC tests/ipmi-bt-test.o
CC tests/i440fx-test.o
CC tests/fw_cfg-test.o
CC tests/drive_del-test.o
CC tests/wdt_ib700-test.o
CC tests/tco-test.o
CC tests/e1000-test.o
CC tests/e1000e-test.o
CC tests/pcnet-test.o
CC tests/rtl8139-test.o
/tmp/qemu-test/src/tests/ide-test.c: In function ‘cdrom_pio_impl’:
/tmp/qemu-test/src/tests/ide-test.c:803: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
/tmp/qemu-test/src/tests/ide-test.c: In function ‘test_cdrom_dma’:
/tmp/qemu-test/src/tests/ide-test.c:899: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
CC tests/eepro100-test.o
CC tests/ne2000-test.o
CC tests/nvme-test.o
CC tests/ac97-test.o
CC tests/es1370-test.o
CC tests/virtio-net-test.o
CC tests/virtio-balloon-test.o
CC tests/virtio-blk-test.o
CC tests/virtio-rng-test.o
CC tests/virtio-scsi-test.o
CC tests/virtio-serial-test.o
CC tests/virtio-console-test.o
CC tests/tpci200-test.o
CC tests/ipoctal232-test.o
CC tests/display-vga-test.o
CC tests/intel-hda-test.o
CC tests/ivshmem-test.o
CC tests/vmxnet3-test.o
CC tests/pvpanic-test.o
CC tests/i82801b11-test.o
CC tests/ioh3420-test.o
CC tests/usb-hcd-ohci-test.o
CC tests/libqos/usb.o
CC tests/usb-hcd-uhci-test.o
CC tests/usb-hcd-ehci-test.o
CC tests/usb-hcd-xhci-test.o
CC tests/pc-cpu-test.o
CC tests/q35-test.o
CC tests/test-netfilter.o
CC tests/test-filter-mirror.o
CC tests/test-filter-redirector.o
CC tests/postcopy-test.o
CC tests/test-x86-cpuid-compat.o
CC tests/qmp-test.o
CC tests/device-introspect-test.o
CC tests/qom-test.o
CC tests/test-hmp.o
LINK tests/check-qdict
LINK tests/test-char
LINK tests/check-qfloat
LINK tests/check-qint
LINK tests/check-qstring
LINK tests/check-qlist
LINK tests/check-qnull
LINK tests/check-qjson
CC tests/test-qapi-visit.o
CC tests/test-qapi-types.o
CC tests/test-qapi-event.o
CC tests/test-qmp-introspect.o
CC tests/test-qmp-marshal.o
LINK tests/test-coroutine
LINK tests/test-iov
LINK tests/test-aio
LINK tests/test-aio-multithread
LINK tests/test-throttle
LINK tests/test-thread-pool
LINK tests/test-hbitmap
LINK tests/test-blockjob
LINK tests/test-blockjob-txn
LINK tests/test-x86-cpuid
LINK tests/test-xbzrle
LINK tests/test-vmstate
LINK tests/test-cutils
LINK tests/test-shift128
LINK tests/test-mul64
LINK tests/test-int128
LINK tests/rcutorture
LINK tests/test-rcu-list
LINK tests/test-qdist
LINK tests/test-qht
LINK tests/qht-bench
LINK tests/test-bitops
LINK tests/test-bitcnt
LINK tests/check-qom-interface
LINK tests/check-qom-proplist
LINK tests/test-qemu-opts
LINK tests/test-keyval
LINK tests/test-write-threshold
LINK tests/test-crypto-hash
LINK tests/test-crypto-hmac
LINK tests/test-crypto-cipher
LINK tests/test-crypto-secret
LINK tests/test-qga
LINK tests/test-timed-average
LINK tests/test-io-task
LINK tests/test-io-channel-socket
LINK tests/test-io-channel-file
LINK tests/test-io-channel-command
LINK tests/test-io-channel-buffer
LINK tests/test-base64
LINK tests/test-crypto-ivgen
LINK tests/test-crypto-afsplit
LINK tests/test-crypto-xts
LINK tests/test-crypto-block
LINK tests/test-logging
LINK tests/test-replication
LINK tests/test-bufferiszero
LINK tests/test-uuid
LINK tests/ptimer-test
LINK tests/test-qapi-util
LINK tests/vhost-user-test
LINK tests/endianness-test
LINK tests/fdc-test
LINK tests/ide-test
LINK tests/ahci-test
LINK tests/hd-geo-test
LINK tests/boot-order-test
LINK tests/bios-tables-test
LINK tests/boot-serial-test
LINK tests/pxe-test
LINK tests/rtc-test
LINK tests/ipmi-kcs-test
LINK tests/ipmi-bt-test
LINK tests/i440fx-test
LINK tests/fw_cfg-test
LINK tests/drive_del-test
LINK tests/wdt_ib700-test
LINK tests/tco-test
LINK tests/e1000-test
LINK tests/e1000e-test
LINK tests/rtl8139-test
LINK tests/pcnet-test
LINK tests/eepro100-test
LINK tests/ne2000-test
LINK tests/nvme-test
LINK tests/ac97-test
LINK tests/es1370-test
LINK tests/virtio-net-test
LINK tests/virtio-balloon-test
LINK tests/virtio-blk-test
LINK tests/virtio-rng-test
LINK tests/virtio-scsi-test
LINK tests/virtio-serial-test
LINK tests/virtio-console-test
LINK tests/tpci200-test
LINK tests/ipoctal232-test
LINK tests/display-vga-test
LINK tests/intel-hda-test
LINK tests/ivshmem-test
LINK tests/vmxnet3-test
LINK tests/pvpanic-test
LINK tests/i82801b11-test
LINK tests/ioh3420-test
LINK tests/usb-hcd-ohci-test
LINK tests/usb-hcd-uhci-test
LINK tests/usb-hcd-ehci-test
LINK tests/usb-hcd-xhci-test
LINK tests/pc-cpu-test
LINK tests/q35-test
LINK tests/test-netfilter
LINK tests/test-filter-mirror
LINK tests/test-filter-redirector
LINK tests/postcopy-test
LINK tests/test-x86-cpuid-compat
LINK tests/qmp-test
LINK tests/device-introspect-test
LINK tests/qom-test
LINK tests/test-hmp
GTESTER tests/check-qdict
GTESTER tests/test-char
GTESTER tests/check-qfloat
GTESTER tests/check-qint
GTESTER tests/check-qlist
GTESTER tests/check-qstring
GTESTER tests/check-qnull
GTESTER tests/check-qjson
LINK tests/test-qobject-output-visitor
LINK tests/test-clone-visitor
LINK tests/test-qobject-input-visitor
LINK tests/test-qmp-commands
LINK tests/test-string-input-visitor
LINK tests/test-string-output-visitor
LINK tests/test-qmp-event
LINK tests/test-opts-visitor
GTESTER tests/test-coroutine
LINK tests/test-visitor-serialization
GTESTER tests/test-iov
GTESTER tests/test-aio
GTESTER tests/test-aio-multithread
GTESTER tests/test-throttle
GTESTER tests/test-thread-pool
GTESTER tests/test-hbitmap
GTESTER tests/test-blockjob
GTESTER tests/test-blockjob-txn
GTESTER tests/test-x86-cpuid
GTESTER tests/test-xbzrle
GTESTER tests/test-vmstate
Failed to load simple/primitive:b_1
Failed to load simple/primitive:i64_2
Failed to load simple/primitive:i32_1
Failed to load simple/primitive:i32_1
Failed to load test/with_tmp:a
Failed to load test/tmp_child_parent:f
Failed to load test/tmp_child:parent
Failed to load test/with_tmp:tmp
Failed to load test/tmp_child:diff
Failed to load test/with_tmp:tmp
Failed to load test/tmp_child:diff
Failed to load test/with_tmp:tmp
GTESTER tests/test-cutils
GTESTER tests/test-shift128
GTESTER tests/test-mul64
GTESTER tests/test-int128
GTESTER tests/rcutorture
GTESTER tests/test-rcu-list
GTESTER tests/test-qdist
GTESTER tests/test-qht
LINK tests/test-qht-par
GTESTER tests/test-bitops
GTESTER tests/test-bitcnt
GTESTER tests/check-qom-interface
GTESTER tests/check-qom-proplist
GTESTER tests/test-qemu-opts
GTESTER tests/test-keyval
GTESTER tests/test-write-threshold
GTESTER tests/test-crypto-hash
GTESTER tests/test-crypto-hmac
GTESTER tests/test-crypto-cipher
GTESTER tests/test-crypto-secret
GTESTER tests/test-qga
GTESTER tests/test-timed-average
GTESTER tests/test-io-task
GTESTER tests/test-io-channel-socket
GTESTER tests/test-io-channel-file
GTESTER tests/test-io-channel-command
GTESTER tests/test-io-channel-buffer
GTESTER tests/test-base64
GTESTER tests/test-crypto-ivgen
GTESTER tests/test-crypto-afsplit
GTESTER tests/test-crypto-xts
GTESTER tests/test-crypto-block
GTESTER tests/test-logging
GTESTER tests/test-replication
GTESTER tests/test-bufferiszero
GTESTER tests/test-uuid
GTESTER tests/ptimer-test
GTESTER tests/test-qapi-util
GTESTER check-qtest-x86_64
GTESTER check-qtest-aarch64
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Resource temporarily unavailable (11)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 1 ring restore failed: -1: Resource temporarily unavailable (11)
GTESTER tests/test-qobject-output-visitor
GTESTER tests/test-clone-visitor
GTESTER tests/test-qobject-input-visitor
GTESTER tests/test-qmp-commands
GTESTER tests/test-string-output-visitor
GTESTER tests/test-string-input-visitor
GTESTER tests/test-qmp-event
GTESTER tests/test-opts-visitor
GTESTER tests/test-visitor-serialization
GTESTER tests/test-qht-par
**
ERROR:/tmp/qemu-test/src/tests/vhost-user-test.c:196:wait_for_fds: assertion failed: (s->fds_num)
GTester: last random seed: R02Sb1c3d996a9caf5abce0d6440075926af
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-9tacbi6p/src'
real 15m8.809s
user 0m4.187s
sys 0m1.301s
BUILD fedora
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-9tacbi6p/src'
ARCHIVE qemu.tgz
ARCHIVE dtc.tgz
COPY RUNNER
RUN test-mingw in qemu:fedora
Packages installed:
PyYAML-3.11-13.fc25.x86_64
SDL-devel-1.2.15-21.fc24.x86_64
bc-1.06.95-16.fc24.x86_64
bison-3.0.4-4.fc24.x86_64
ccache-3.3.4-1.fc25.x86_64
clang-3.9.1-2.fc25.x86_64
findutils-4.6.0-8.fc25.x86_64
flex-2.6.0-3.fc25.x86_64
gcc-6.3.1-1.fc25.x86_64
gcc-c++-6.3.1-1.fc25.x86_64
git-2.9.3-2.fc25.x86_64
glib2-devel-2.50.3-1.fc25.x86_64
libfdt-devel-1.4.2-1.fc25.x86_64
make-4.1-5.fc24.x86_64
mingw32-SDL-1.2.15-7.fc24.noarch
mingw32-bzip2-1.0.6-7.fc24.noarch
mingw32-curl-7.47.0-1.fc24.noarch
mingw32-glib2-2.50.1-1.fc25.noarch
mingw32-gmp-6.1.1-1.fc25.noarch
mingw32-gnutls-3.5.5-2.fc25.noarch
mingw32-gtk2-2.24.31-2.fc25.noarch
mingw32-gtk3-3.22.2-1.fc25.noarch
mingw32-libjpeg-turbo-1.5.1-1.fc25.noarch
mingw32-libpng-1.6.27-1.fc25.noarch
mingw32-libssh2-1.4.3-5.fc24.noarch
mingw32-libtasn1-4.9-1.fc25.noarch
mingw32-nettle-3.3-1.fc25.noarch
mingw32-pixman-0.34.0-1.fc25.noarch
mingw32-pkg-config-0.28-6.fc24.x86_64
mingw64-SDL-1.2.15-7.fc24.noarch
mingw64-bzip2-1.0.6-7.fc24.noarch
mingw64-curl-7.47.0-1.fc24.noarch
mingw64-glib2-2.50.1-1.fc25.noarch
mingw64-gmp-6.1.1-1.fc25.noarch
mingw64-gnutls-3.5.5-2.fc25.noarch
mingw64-gtk2-2.24.31-2.fc25.noarch
mingw64-gtk3-3.22.2-1.fc25.noarch
mingw64-libjpeg-turbo-1.5.1-1.fc25.noarch
mingw64-libpng-1.6.27-1.fc25.noarch
mingw64-libssh2-1.4.3-5.fc24.noarch
mingw64-libtasn1-4.9-1.fc25.noarch
mingw64-nettle-3.3-1.fc25.noarch
mingw64-pixman-0.34.0-1.fc25.noarch
mingw64-pkg-config-0.28-6.fc24.x86_64
package python2 is not installed
perl-5.24.1-385.fc25.x86_64
pixman-devel-0.34.0-2.fc24.x86_64
sparse-0.5.0-10.fc25.x86_64
tar-1.29-3.fc25.x86_64
which-2.21-1.fc25.x86_64
zlib-devel-1.2.8-10.fc24.x86_64
Environment variables:
FBR=f25
PACKAGES=ccache git tar PyYAML sparse flex bison python2 glib2-devel pixman-devel zlib-devel SDL-devel libfdt-devel gcc gcc-c++ clang make perl which bc findutils mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL mingw32-pkg-config mingw32-gtk2 mingw32-gtk3 mingw32-gnutls mingw32-nettle mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp mingw64-SDL mingw64-pkg-config mingw64-gtk2 mingw64-gtk3 mingw64-gnutls mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo mingw64-libpng mingw64-curl mingw64-libssh2 mingw64-bzip2
HOSTNAME=
TERM=xterm
MAKEFLAGS= -j8
HISTSIZE=1000
J=8
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36:
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
MAIL=/var/spool/mail/root
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
TARGET_LIST=
HISTCONTROL=ignoredups
FGC=f25
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
DISTTAG=f25docker
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
FEATURES=mingw clang pyyaml dtc
DEBUG=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/var/tmp/qemu-build/install --cross-prefix=x86_64-w64-mingw32- --enable-trace-backends=simple --enable-debug --enable-gnutls --enable-nettle --enable-curl --enable-vnc --enable-bzip2 --enable-guest-agent --with-sdlabi=1.2 --with-gtkabi=2.0
Install prefix /var/tmp/qemu-build/install
BIOS directory /var/tmp/qemu-build/install
binary directory /var/tmp/qemu-build/install
library directory /var/tmp/qemu-build/install/lib
module directory /var/tmp/qemu-build/install/lib
libexec directory /var/tmp/qemu-build/install/libexec
include directory /var/tmp/qemu-build/install/include
config directory /var/tmp/qemu-build/install
local state directory queried at runtime
Windows SDK no
Source path /tmp/qemu-test/src
C compiler x86_64-w64-mingw32-gcc
Host C compiler cc
C++ compiler x86_64-w64-mingw32-g++
Objective-C compiler clang
ARFLAGS rv
CFLAGS -g
QEMU_CFLAGS -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -Werror -mms-bitfields -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/glib-2.0 -I/usr/x86_64-w64-mingw32/sys-root/mingw/lib/glib-2.0/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -m64 -mcx16 -mthreads -D__USE_MINGW_ANSI_STDIO=1 -DWIN32_LEAN_AND_MEAN -DWINVER=0x501 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/p11-kit-1 -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/libpng16
LDFLAGS -Wl,--nxcompat -Wl,--no-seh -Wl,--dynamicbase -Wl,--warn-common -m64 -g
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
tcg debug enabled yes
gprof enabled no
sparse enabled no
strip binaries no
profiler no
static build no
pixman system
SDL support yes (1.2.15)
GTK support yes (2.24.31)
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support yes
GNUTLS rnd yes
libgcrypt no
libgcrypt kdf no
nettle yes (3.3)
nettle kdf yes
libtasn1 yes
curses support no
virgl support no
curl support yes
mingw32 support yes
Audio drivers dsound
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
VNC support yes
VNC SASL support no
VNC JPEG support yes
VNC PNG support yes
xen support no
brlapi support no
bluez support no
Documentation no
PIE no
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support no
Install blobs yes
KVM support no
HAX support yes
RDMA support no
TCG interpreter no
fdt support yes
preadv support no
fdatasync no
madvise no
posix_madvise no
libcap-ng support no
vhost-net support no
vhost-scsi support no
vhost-vsock support no
Trace backends simple
Trace output file trace-<pid>
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info yes
QGA MSI support no
seccomp support no
coroutine backend win32
coroutine pool yes
debug stack usage no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support yes
TPM passthrough no
QOM debugging yes
lzo support no
snappy support no
bzip2 support yes
NUMA host support no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
VxHS block device no
mkdir -p dtc/libfdt
mkdir -p dtc/tests
GEN aarch64-softmmu/config-devices.mak.tmp
GEN x86_64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qmp-commands.h
GEN qapi-types.h
GEN qapi-visit.h
GEN qapi-event.h
GEN x86_64-softmmu/config-devices.mak
GEN qmp-marshal.c
GEN aarch64-softmmu/config-devices.mak
GEN qapi-types.c
GEN qapi-visit.c
GEN qapi-event.c
GEN qmp-introspect.h
GEN qmp-introspect.c
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
GEN trace/generated-helpers.c
GEN module_block.h
GEN tests/test-qapi-types.h
GEN tests/test-qapi-visit.h
GEN tests/test-qmp-commands.h
GEN tests/test-qapi-event.h
GEN tests/test-qmp-introspect.h
GEN trace-root.h
GEN util/trace.h
GEN crypto/trace.h
GEN io/trace.h
GEN migration/trace.h
GEN block/trace.h
GEN backends/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/intc/trace.h
GEN hw/net/trace.h
GEN hw/virtio/trace.h
GEN hw/audio/trace.h
GEN hw/misc/trace.h
GEN hw/usb/trace.h
GEN hw/scsi/trace.h
GEN hw/nvram/trace.h
GEN hw/display/trace.h
GEN hw/input/trace.h
GEN hw/timer/trace.h
GEN hw/dma/trace.h
GEN hw/sparc/trace.h
GEN hw/sd/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/9pfs/trace.h
GEN hw/ppc/trace.h
GEN hw/pci/trace.h
GEN hw/s390x/trace.h
GEN hw/vfio/trace.h
GEN hw/acpi/trace.h
GEN hw/arm/trace.h
GEN hw/alpha/trace.h
GEN hw/xen/trace.h
GEN ui/trace.h
GEN audio/trace.h
GEN net/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/sparc/trace.h
GEN target/s390x/trace.h
GEN target/ppc/trace.h
GEN qom/trace.h
GEN linux-user/trace.h
GEN qapi/trace.h
GEN trace-root.c
GEN util/trace.c
GEN crypto/trace.c
GEN io/trace.c
GEN migration/trace.c
GEN block/trace.c
GEN backends/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/intc/trace.c
GEN hw/net/trace.c
GEN hw/virtio/trace.c
GEN hw/audio/trace.c
GEN hw/misc/trace.c
GEN hw/usb/trace.c
GEN hw/scsi/trace.c
GEN hw/nvram/trace.c
GEN hw/display/trace.c
GEN hw/input/trace.c
GEN hw/timer/trace.c
GEN hw/dma/trace.c
GEN hw/sparc/trace.c
GEN hw/sd/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/9pfs/trace.c
GEN hw/ppc/trace.c
GEN hw/pci/trace.c
GEN hw/s390x/trace.c
GEN hw/vfio/trace.c
GEN hw/acpi/trace.c
GEN hw/arm/trace.c
GEN hw/alpha/trace.c
GEN hw/xen/trace.c
GEN ui/trace.c
GEN audio/trace.c
GEN net/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/sparc/trace.c
GEN target/s390x/trace.c
GEN target/ppc/trace.c
GEN qom/trace.c
GEN linux-user/trace.c
GEN qapi/trace.c
GEN config-all-devices.mak
DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
DEP /tmp/qemu-test/src/dtc/tests/testutils.c
DEP /tmp/qemu-test/src/dtc/tests/trees.S
DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
DEP /tmp/qemu-test/src/dtc/tests/check_path.c
DEP /tmp/qemu-test/src/dtc/tests/overlay_bad_fixup.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/overlay.c
DEP /tmp/qemu-test/src/dtc/tests/property_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
DEP /tmp/qemu-test/src/dtc/tests/incbin.c
DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
DEP /tmp/qemu-test/src/dtc/tests/path-references.c
DEP /tmp/qemu-test/src/dtc/tests/references.c
DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
DEP /tmp/qemu-test/src/dtc/tests/del_node.c
DEP /tmp/qemu-test/src/dtc/tests/del_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop.c
DEP /tmp/qemu-test/src/dtc/tests/set_name.c
DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
DEP /tmp/qemu-test/src/dtc/tests/stringlist.c
DEP /tmp/qemu-test/src/dtc/tests/addr_size_cells.c
DEP /tmp/qemu-test/src/dtc/tests/notfound.c
DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
DEP /tmp/qemu-test/src/dtc/tests/get_path.c
DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/getprop.c
DEP /tmp/qemu-test/src/dtc/tests/get_name.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
DEP /tmp/qemu-test/src/dtc/tests/find_property.c
DEP /tmp/qemu-test/src/dtc/tests/root_node.c
DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_overlay.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_addresses.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
DEP /tmp/qemu-test/src/dtc/util.c
DEP /tmp/qemu-test/src/dtc/fdtput.c
DEP /tmp/qemu-test/src/dtc/fdtget.c
LEX convert-dtsv0-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/fdtdump.c
DEP /tmp/qemu-test/src/dtc/srcpos.c
BISON dtc-parser.tab.c
LEX dtc-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/treesource.c
DEP /tmp/qemu-test/src/dtc/livetree.c
DEP /tmp/qemu-test/src/dtc/fstree.c
DEP /tmp/qemu-test/src/dtc/flattree.c
DEP /tmp/qemu-test/src/dtc/dtc.c
DEP /tmp/qemu-test/src/dtc/data.c
DEP /tmp/qemu-test/src/dtc/checks.c
DEP convert-dtsv0-lexer.lex.c
DEP dtc-parser.tab.c
DEP dtc-lexer.lex.c
CHK version_gen.h
UPD version_gen.h
DEP /tmp/qemu-test/src/dtc/util.c
CC libfdt/fdt.o
CC libfdt/fdt_sw.o
CC libfdt/fdt_wip.o
CC libfdt/fdt_ro.o
CC libfdt/fdt_strerror.o
CC libfdt/fdt_empty_tree.o
CC libfdt/fdt_rw.o
CC libfdt/fdt_addresses.o
CC libfdt/fdt_overlay.o
AR libfdt/libfdt.a
x86_64-w64-mingw32-ar: creating libfdt/libfdt.a
a - libfdt/fdt.o
a - libfdt/fdt_ro.o
a - libfdt/fdt_wip.o
a - libfdt/fdt_sw.o
a - libfdt/fdt_rw.o
a - libfdt/fdt_strerror.o
a - libfdt/fdt_empty_tree.o
a - libfdt/fdt_addresses.o
a - libfdt/fdt_overlay.o
RC version.o
GEN qga/qapi-generated/qga-qapi-types.h
GEN qga/qapi-generated/qga-qapi-visit.h
GEN qga/qapi-generated/qga-qmp-commands.h
GEN qga/qapi-generated/qga-qapi-visit.c
GEN qga/qapi-generated/qga-qapi-types.c
CC qapi-types.o
GEN qga/qapi-generated/qga-qmp-marshal.c
CC qmp-introspect.o
CC qapi-visit.o
CC qapi-event.o
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-dispatch.o
CC qapi/qmp-registry.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qint.o
CC qobject/qstring.o
CC qobject/qlist.o
CC qobject/qdict.o
CC qobject/qfloat.o
CC qobject/qbool.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC trace/simple.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/lockcnt.o
CC util/aiocb.o
CC util/async.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-win32.o
CC util/event_notifier-win32.o
CC util/oslib-win32.o
CC util/qemu-thread-win32.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/host-utils.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/keyval.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-win32.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC util/systemd.o
CC trace-root.o
CC util/trace.o
CC crypto/trace.o
CC io/trace.o
CC migration/trace.o
CC block/trace.o
CC backends/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/intc/trace.o
CC hw/net/trace.o
CC hw/virtio/trace.o
CC hw/audio/trace.o
CC hw/misc/trace.o
CC hw/usb/trace.o
CC hw/scsi/trace.o
CC hw/nvram/trace.o
CC hw/display/trace.o
CC hw/input/trace.o
CC hw/timer/trace.o
CC hw/dma/trace.o
CC hw/sparc/trace.o
CC hw/sd/trace.o
CC hw/isa/trace.o
CC hw/mem/trace.o
CC hw/i386/trace.o
CC hw/i386/xen/trace.o
CC hw/9pfs/trace.o
CC hw/ppc/trace.o
CC hw/pci/trace.o
CC hw/s390x/trace.o
CC hw/vfio/trace.o
CC hw/acpi/trace.o
CC hw/arm/trace.o
CC hw/alpha/trace.o
CC hw/xen/trace.o
CC ui/trace.o
CC audio/trace.o
CC net/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/sparc/trace.o
CC target/s390x/trace.o
CC target/ppc/trace.o
CC qom/trace.o
CC linux-user/trace.o
CC qapi/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset.o
CC stubs/gdbstub.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/sysbus.o
CC stubs/slirp.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/fd-register.o
CC stubs/qmp_pc_dimm_device_list.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/xen-common.o
CC stubs/xen-hvm.o
GEN qemu-img-cmds.h
CC blockjob.o
CC block.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw-format.o
CC block/vdi.o
CC block/qcow.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qed.o
CC block/qed-gencb.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/file-win32.o
CC block/win32-aio.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/crypto.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC block/curl.o
CC block/ssh.o
CC block/dmg-bz2.o
CC crypto/init.o
CC crypto/hash.o
CC crypto/hash-nettle.o
CC crypto/hmac.o
CC crypto/hmac-nettle.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlscredsx509.o
CC crypto/tlssession.o
CC crypto/secret.o
CC crypto/random-gnutls.o
CC crypto/pbkdf.o
CC crypto/pbkdf-nettle.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel.o
CC io/channel-buffer.o
CC io/channel-command.o
CC io/channel-file.o
CC io/channel-socket.o
CC io/channel-tls.o
CC io/channel-watch.o
CC io/channel-websock.o
CC io/channel-util.o
CC io/dns-resolver.o
CC io/task.o
CC qom/object.o
CC qom/container.o
CC qom/qom-qobject.o
CC qom/object_interfaces.o
CC qemu-io.o
CC blockdev.o
CC blockdev-nbd.o
CC iothread.o
CC qdev-monitor.o
CC device-hotplug.o
CC os-win32.o
CC page_cache.o
CC accel.o
CC bt-host.o
CC bt-vhci.o
CC dma-helpers.o
CC vl.o
CC tpm.o
CC device_tree.o
CC qmp-marshal.o
CC qmp.o
CC hmp.o
CC cpus-common.o
CC audio/audio.o
CC audio/noaudio.o
CC audio/wavaudio.o
In file included from /tmp/qemu-test/src/include/hw/virtio/vhost-pci-slave.h:4:0,
                 from /tmp/qemu-test/src/vl.c:132:
/tmp/qemu-test/src/linux-headers/linux/vhost.h:13:25: fatal error: linux/types.h: No such file or directory
 #include <linux/types.h>
                         ^
compilation terminated.
/tmp/qemu-test/src/rules.mak:69: recipe for target 'vl.o' failed
make: *** [vl.o] Error 1
make: *** Waiting for unfinished jobs....
tests/docker/Makefile.include:118: recipe for target 'docker-run' failed
make[1]: *** [docker-run] Error 2
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-9tacbi6p/src'
tests/docker/Makefile.include:149: recipe for target 'docker-run-test-mingw@fedora' failed
make: *** [docker-run-test-mingw@fedora] Error 2
=== OUTPUT END ===
Test command exited with code: 2
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] [PATCH v2 06/16] virtio: add inter-vm notification support
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support Wei Wang
@ 2017-05-15 0:21 ` Wei Wang
0 siblings, 0 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-15 0:21 UTC (permalink / raw)
To: stefanha, marcandre.lureau, mst, jasowang, pbonzini, virtio-dev,
qemu-devel
On 05/12/2017 04:35 PM, Wei Wang wrote:
> This patch enables the assignment of an already allocated eventfd to a
> notifier. In this case, QEMU creates a new eventfd for the notifier only
> when the notifier's fd equals -1. Otherwise, the notifier has already
> been assigned a valid fd.
>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
>
+ Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Best,
Wei
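As a rough illustration, the policy described in the commit message amounts
to something like the minimal sketch below (hypothetical type and function
names, not the actual QEMU patch):

  #include <sys/eventfd.h>

  /* Stand-in for QEMU's EventNotifier; fd == -1 means "not assigned yet". */
  typedef struct EventNotifier {
      int fd;
  } EventNotifier;

  /* Create a new eventfd only when no fd has been pre-assigned
   * (e.g. one hypothetically received from the vhost-user master). */
  static int event_notifier_init_sketch(EventNotifier *e)
  {
      if (e->fd != -1) {
          return 0;               /* keep the pre-assigned fd */
      }
      e->fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
      return e->fd < 0 ? -1 : 0;
  }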
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
` (16 preceding siblings ...)
2017-05-12 9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
@ 2017-05-16 6:46 ` Jason Wang
2017-05-16 7:12 ` [Qemu-devel] [virtio-dev] " Wei Wang
17 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-16 6:46 UTC (permalink / raw)
To: Wei Wang, stefanha, marcandre.lureau, mst, pbonzini, virtio-dev,
qemu-devel
On 2017-05-12 16:35, Wei Wang wrote:
> This patch series implements vhost-pci, which is a point-to-point based
> inter-vm communication solution. The QEMU side implementation includes the
> vhost-user extension, vhost-pci device emulation and management, and inter-VM
> notification.
>
> v1->v2 changes:
> 1) inter-VM notification support;
> 2) vhost-pci-net ctrlq message format change;
> 3) patch re-org and code cleanup.
>
> Wei Wang (16):
> vhost-user: share the vhost-user protocol related structures
> vl: add the vhost-pci-slave command line option
> vhost-pci-slave: create a vhost-user slave to support vhost-pci
> vhost-pci-net: add vhost-pci-net
> vhost-pci-net-pci: add vhost-pci-net-pci
> virtio: add inter-vm notification support
> vhost-user: send device id to the slave
> vhost-user: send guest physical address of virtqueues to the slave
> vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP
> vhost-pci-net: send the negotiated feature bits to the master
> vhost-user: add asynchronous read for the vhost-user master
> vhost-user: handling VHOST_USER_SET_FEATURES
> vhost-pci-slave: add "reset_virtio"
> vhost-pci-slave: add support to delete a vhost-pci device
> vhost-pci-net: tell the driver that it is ready to send packets
> vl: enable vhost-pci-slave
>
> hw/net/Makefile.objs | 2 +-
> hw/net/vhost-pci-net.c | 364 +++++++++++++
> hw/net/vhost_net.c | 39 ++
> hw/virtio/Makefile.objs | 7 +-
> hw/virtio/vhost-pci-slave.c | 676 +++++++++++++++++++++++++
> hw/virtio/vhost-stub.c | 22 +
> hw/virtio/vhost-user.c | 192 +++----
> hw/virtio/vhost.c | 63 ++-
> hw/virtio/virtio-bus.c | 19 +-
> hw/virtio/virtio-pci.c | 96 +++-
> hw/virtio/virtio-pci.h | 16 +
> hw/virtio/virtio.c | 32 +-
> include/hw/pci/pci.h | 1 +
> include/hw/virtio/vhost-backend.h | 2 +
> include/hw/virtio/vhost-pci-net.h | 40 ++
> include/hw/virtio/vhost-pci-slave.h | 64 +++
> include/hw/virtio/vhost-user.h | 110 ++++
> include/hw/virtio/vhost.h | 3 +
> include/hw/virtio/virtio.h | 2 +
> include/net/vhost-user.h | 22 +-
> include/net/vhost_net.h | 2 +
> include/standard-headers/linux/vhost_pci_net.h | 74 +++
> include/standard-headers/linux/virtio_ids.h | 1 +
> net/vhost-user.c | 37 +-
> qemu-options.hx | 4 +
> vl.c | 46 ++
> 26 files changed, 1796 insertions(+), 140 deletions(-)
> create mode 100644 hw/net/vhost-pci-net.c
> create mode 100644 hw/virtio/vhost-pci-slave.c
> create mode 100644 include/hw/virtio/vhost-pci-net.h
> create mode 100644 include/hw/virtio/vhost-pci-slave.h
> create mode 100644 include/hw/virtio/vhost-user.h
> create mode 100644 include/standard-headers/linux/vhost_pci_net.h
>
Hi:
Care to post the driver code too?
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-16 6:46 ` Jason Wang
@ 2017-05-16 7:12 ` Wei Wang
2017-05-17 6:16 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-16 7:12 UTC (permalink / raw)
To: Jason Wang, stefanha, marcandre.lureau, mst, pbonzini,
virtio-dev, qemu-devel
On 05/16/2017 02:46 PM, Jason Wang wrote:
>
>
> On 2017-05-12 16:35, Wei Wang wrote:
>> This patch series implements vhost-pci, which is a point-to-point based
>> inter-vm communication solution. The QEMU side implementation
>> includes the
>> vhost-user extension, vhost-pci device emulation and management, and
>> inter-VM
>> notification.
>>
>> v1->v2 changes:
>> 1) inter-VM notification support;
>> 2) vhost-pci-net ctrlq message format change;
>> 3) patch re-org and code cleanup.
>>
>> Wei Wang (16):
>> vhost-user: share the vhost-user protocol related structures
>> vl: add the vhost-pci-slave command line option
>> vhost-pci-slave: create a vhost-user slave to support vhost-pci
>> vhost-pci-net: add vhost-pci-net
>> vhost-pci-net-pci: add vhost-pci-net-pci
>> virtio: add inter-vm notification support
>> vhost-user: send device id to the slave
>> vhost-user: send guest physical address of virtqueues to the slave
>> vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP
>> vhost-pci-net: send the negotiated feature bits to the master
>> vhost-user: add asynchronous read for the vhost-user master
>> vhost-user: handling VHOST_USER_SET_FEATURES
>> vhost-pci-slave: add "reset_virtio"
>> vhost-pci-slave: add support to delete a vhost-pci device
>> vhost-pci-net: tell the driver that it is ready to send packets
>> vl: enable vhost-pci-slave
>>
>> hw/net/Makefile.objs | 2 +-
>> hw/net/vhost-pci-net.c | 364 +++++++++++++
>> hw/net/vhost_net.c | 39 ++
>> hw/virtio/Makefile.objs | 7 +-
>> hw/virtio/vhost-pci-slave.c | 676
>> +++++++++++++++++++++++++
>> hw/virtio/vhost-stub.c | 22 +
>> hw/virtio/vhost-user.c | 192 +++----
>> hw/virtio/vhost.c | 63 ++-
>> hw/virtio/virtio-bus.c | 19 +-
>> hw/virtio/virtio-pci.c | 96 +++-
>> hw/virtio/virtio-pci.h | 16 +
>> hw/virtio/virtio.c | 32 +-
>> include/hw/pci/pci.h | 1 +
>> include/hw/virtio/vhost-backend.h | 2 +
>> include/hw/virtio/vhost-pci-net.h | 40 ++
>> include/hw/virtio/vhost-pci-slave.h | 64 +++
>> include/hw/virtio/vhost-user.h | 110 ++++
>> include/hw/virtio/vhost.h | 3 +
>> include/hw/virtio/virtio.h | 2 +
>> include/net/vhost-user.h | 22 +-
>> include/net/vhost_net.h | 2 +
>> include/standard-headers/linux/vhost_pci_net.h | 74 +++
>> include/standard-headers/linux/virtio_ids.h | 1 +
>> net/vhost-user.c | 37 +-
>> qemu-options.hx | 4 +
>> vl.c | 46 ++
>> 26 files changed, 1796 insertions(+), 140 deletions(-)
>> create mode 100644 hw/net/vhost-pci-net.c
>> create mode 100644 hw/virtio/vhost-pci-slave.c
>> create mode 100644 include/hw/virtio/vhost-pci-net.h
>> create mode 100644 include/hw/virtio/vhost-pci-slave.h
>> create mode 100644 include/hw/virtio/vhost-user.h
>> create mode 100644 include/standard-headers/linux/vhost_pci_net.h
>>
>
> Hi:
>
> Care to post the driver codes too?
>
OK. It may take some time to clean up the driver code before posting it
out. In the meantime, you can have a look at the draft in the repo here:
https://github.com/wei-w-wang/vhost-pci-driver
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-12 9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
@ 2017-05-16 15:21 ` Michael S. Tsirkin
0 siblings, 0 replies; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-05-16 15:21 UTC (permalink / raw)
To: qemu-devel
Cc: wei.w.wang, famz, stefanha, marcandre.lureau, jasowang, pbonzini,
virtio-dev
On Fri, May 12, 2017 at 02:30:00AM -0700, no-reply@patchew.org wrote:
> In file included from /tmp/qemu-test/src/include/hw/virtio/vhost-pci-slave.h:4:0,
>                  from /tmp/qemu-test/src/vl.c:132:
> /tmp/qemu-test/src/linux-headers/linux/vhost.h:13:25: fatal error: linux/types.h: No such file or directory
>  #include <linux/types.h>
>                          ^
> compilation terminated.
> /tmp/qemu-test/src/rules.mak:69: recipe for target 'vl.o' failed
> make: *** [vl.o] Error 1
> make: *** Waiting for unfinished jobs....
> tests/docker/Makefile.include:118: recipe for target 'docker-run' failed
> make[1]: *** [docker-run] Error 2
> make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-9tacbi6p/src'
> tests/docker/Makefile.include:149: recipe for target 'docker-run-test-mingw@fedora' failed
> make: *** [docker-run-test-mingw@fedora] Error 2
> === OUTPUT END ===
That's because you are
- pulling in linux-specific vhost.h which you shouldn't need to
- including vhost-pci-slave.h in vl.c which you shouldn't need to
--
MST
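A minimal sketch of the kind of fix being suggested (illustrative only;
the replacement include and the context lines are assumptions, not an
actual follow-up patch):

  --- a/include/hw/virtio/vhost-pci-slave.h
  +++ b/include/hw/virtio/vhost-pci-slave.h
  @@ (drop the Linux-only header that breaks the mingw cross build)
  -#include <linux/vhost.h>
  +#include "hw/virtio/vhost-user.h"    /* portable protocol structures */

  --- a/vl.c
  +++ b/vl.c
  @@ (vl.c should not need the slave's internal header)
  -#include "hw/virtio/vhost-pci-slave.h"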
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-16 7:12 ` [Qemu-devel] [virtio-dev] " Wei Wang
@ 2017-05-17 6:16 ` Jason Wang
2017-05-17 6:22 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-17 6:16 UTC (permalink / raw)
To: Wei Wang, stefanha, marcandre.lureau, mst, pbonzini, virtio-dev,
qemu-devel
On 2017-05-16 15:12, Wei Wang wrote:
>>>
>>
>> Hi:
>>
>> Care to post the driver codes too?
>>
> OK. It may take some time to clean up the driver code before post it
> out. You can first
> have a check of the draft at the repo here:
> https://github.com/wei-w-wang/vhost-pci-driver
>
> Best,
> Wei
Interesting, it looks like there's one copy on the tx side. We used to have
zerocopy support in tun for VM2VM traffic. Could you please try to
compare it with your vhost-pci-net by:
- making sure zerocopy is enabled for vhost_net
- commenting out skb_orphan_frags() in tun_net_xmit() (a sketch of this
  step follows below)
Thanks
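For reference, the second step would look roughly like the sketch below
(against drivers/net/tun.c of that period; the exact context varies by
kernel version, so treat it as illustrative, not an applicable patch):

  --- a/drivers/net/tun.c
  +++ b/drivers/net/tun.c
  @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
  -	if (skb_orphan_frags(skb, GFP_ATOMIC))
  -		goto drop;
  +	/* experiment: keep zerocopy frags attached on the VM2VM path */
  +	/* if (skb_orphan_frags(skb, GFP_ATOMIC))
  +		goto drop; */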
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-17 6:16 ` Jason Wang
@ 2017-05-17 6:22 ` Jason Wang
2017-05-18 3:03 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-17 6:22 UTC (permalink / raw)
To: Wei Wang, stefanha, marcandre.lureau, mst, pbonzini, virtio-dev,
qemu-devel
On 2017-05-17 14:16, Jason Wang wrote:
>
>
> On 2017-05-16 15:12, Wei Wang wrote:
>>>>
>>>
>>> Hi:
>>>
>>> Care to post the driver codes too?
>>>
>> OK. It may take some time to clean up the driver code before post it
>> out. You can first
>> have a check of the draft at the repo here:
>> https://github.com/wei-w-wang/vhost-pci-driver
>>
>> Best,
>> Wei
>
> Interesting, looks like there's one copy on tx side. We used to have
> zerocopy support for tun for VM2VM traffic. Could you please try to
> compare it with your vhost-pci-net by:
>
> - make sure zerocopy is enabled for vhost_net
> - comment skb_orphan_frags() in tun_net_xmit()
>
> Thanks
>
You can even enable tx batching for tun by ethtool -C tap0 rx-frames N.
This will greatly improve the performance according to my test.
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-17 6:22 ` Jason Wang
@ 2017-05-18 3:03 ` Wei Wang
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-18 3:03 UTC (permalink / raw)
To: Jason Wang, stefanha, marcandre.lureau, mst, pbonzini,
virtio-dev, qemu-devel
On 05/17/2017 02:22 PM, Jason Wang wrote:
>
>
> On 2017-05-17 14:16, Jason Wang wrote:
>>
>>
>> On 2017-05-16 15:12, Wei Wang wrote:
>>>>>
>>>>
>>>> Hi:
>>>>
>>>> Care to post the driver codes too?
>>>>
>>> OK. It may take some time to clean up the driver code before post it
>>> out. You can first
>>> have a check of the draft at the repo here:
>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>
>>> Best,
>>> Wei
>>
>> Interesting, looks like there's one copy on tx side. We used to have
>> zerocopy support for tun for VM2VM traffic. Could you please try to
>> compare it with your vhost-pci-net by:
>>
We can analyze the whole data path - from VM1's network stack sending
packets to VM2's network stack receiving them. The number of copies is
actually the same for both.
vhost-pci: one copy happens in VM1's driver xmit(), which copies packets
from its network stack to VM2's RX ring buffer. (We call it "zerocopy"
because there is no intermediate copy between the VMs.)
zerocopy-enabled vhost-net: one copy happens in tun's recvmsg, which
copies packets from VM1's TX ring buffer to VM2's RX ring buffer.
That being said, we compared to vhost-user, instead of vhost_net, because
vhost-user is the one that is used in NFV, which we think is a major use
case for vhost-pci.
>> - make sure zerocopy is enabled for vhost_net
>> - comment skb_orphan_frags() in tun_net_xmit()
>>
>> Thanks
>>
>
> You can even enable tx batching for tun by ethtool -C tap0 rx-frames
> N. This will greatly improve the performance according to my test.
>
Thanks, but would this hurt latency?
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-18 3:03 ` Wei Wang
@ 2017-05-19 3:10 ` Jason Wang
2017-05-19 9:00 ` Wei Wang
` (2 more replies)
0 siblings, 3 replies; 52+ messages in thread
From: Jason Wang @ 2017-05-19 3:10 UTC (permalink / raw)
To: Wei Wang, stefanha, marcandre.lureau, mst, pbonzini, virtio-dev,
qemu-devel
On 2017-05-18 11:03, Wei Wang wrote:
> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>
>>
>> On 2017-05-17 14:16, Jason Wang wrote:
>>>
>>>
>>> On 2017-05-16 15:12, Wei Wang wrote:
>>>>>>
>>>>>
>>>>> Hi:
>>>>>
>>>>> Care to post the driver codes too?
>>>>>
>>>> OK. It may take some time to clean up the driver code before post
>>>> it out. You can first
>>>> have a check of the draft at the repo here:
>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>
>>>> Best,
>>>> Wei
>>>
>>> Interesting, looks like there's one copy on tx side. We used to have
>>> zerocopy support for tun for VM2VM traffic. Could you please try to
>>> compare it with your vhost-pci-net by:
>>>
> We can analyze from the whole data path - from VM1's network stack to
> send packets -> VM2's
> network stack to receive packets. The number of copies are actually
> the same for both.
That's why I'm asking you to compare the performance. The only reason
for vhost-pci is performance. You should prove it.
>
> vhost-pci: 1-copy happen in VM1's driver xmit(), which copes packets
> from its network stack to VM2's
> RX ring buffer. (we call it "zerocopy" because there is no
> intermediate copy between VMs)
> zerocopy enabled vhost-net: 1-copy happen in tun's recvmsg, which
> copies packets from VM1's TX ring
> buffer to VM2's RX ring buffer.
Actually, there's a major difference here. You do the copy in the guest,
which consumes time slices of the vcpu thread on the host. Vhost_net does
this in its own thread. So I feel vhost_net is even faster here, but maybe
I was wrong.
>
> That being said, we compared to vhost-user, instead of vhost_net,
> because vhost-user is the one
> that is used in NFV, which we think is a major use case for vhost-pci.
If this is true, why not draft a pmd driver instead of a kernel one? And
do you use the virtio-net kernel driver to compare the performance? If yes,
has OVS DPDK been optimized for the kernel driver (I think not)?
More importantly, if vhost-pci is faster, I think its kernel driver
should also be faster than virtio-net, no?
>
>
>>> - make sure zerocopy is enabled for vhost_net
>>> - comment skb_orphan_frags() in tun_net_xmit()
>>>
>>> Thanks
>>>
>>
>> You can even enable tx batching for tun by ethtool -C tap0 rx-frames
>> N. This will greatly improve the performance according to my test.
>>
>
> Thanks, but would this hurt latency?
>
> Best,
> Wei
I don't see this in my test.
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
@ 2017-05-19 9:00 ` Wei Wang
2017-05-19 9:53 ` Jason Wang
2017-05-19 20:44 ` Michael S. Tsirkin
2017-05-19 15:33 ` Stefan Hajnoczi
2017-05-19 16:49 ` Michael S. Tsirkin
2 siblings, 2 replies; 52+ messages in thread
From: Wei Wang @ 2017-05-19 9:00 UTC (permalink / raw)
To: Jason Wang, stefanha, marcandre.lureau, mst, pbonzini,
virtio-dev, qemu-devel
On 05/19/2017 11:10 AM, Jason Wang wrote:
>
>
> On 2017-05-18 11:03, Wei Wang wrote:
>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>
>>>
>>> On 2017-05-17 14:16, Jason Wang wrote:
>>>>
>>>>
>>>> On 2017-05-16 15:12, Wei Wang wrote:
>>>>>>>
>>>>>>
>>>>>> Hi:
>>>>>>
>>>>>> Care to post the driver codes too?
>>>>>>
>>>>> OK. It may take some time to clean up the driver code before post
>>>>> it out. You can first
>>>>> have a check of the draft at the repo here:
>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>
>>>>> Best,
>>>>> Wei
>>>>
>>>> Interesting, looks like there's one copy on tx side. We used to
>>>> have zerocopy support for tun for VM2VM traffic. Could you please
>>>> try to compare it with your vhost-pci-net by:
>>>>
>> We can analyze from the whole data path - from VM1's network stack to
>> send packets -> VM2's
>> network stack to receive packets. The number of copies are actually
>> the same for both.
>
> That's why I'm asking you to compare the performance. The only reason
> for vhost-pci is performance. You should prove it.
>
>>
>> vhost-pci: 1-copy happen in VM1's driver xmit(), which copes packets
>> from its network stack to VM2's
>> RX ring buffer. (we call it "zerocopy" because there is no
>> intermediate copy between VMs)
>> zerocopy enabled vhost-net: 1-copy happen in tun's recvmsg, which
>> copies packets from VM1's TX ring
>> buffer to VM2's RX ring buffer.
>
> Actually, there's a major difference here. You do copy in guest which
> consumes time slice of vcpu thread on host. Vhost_net do this in its
> own thread. So I feel vhost_net is even faster here, maybe I was wrong.
>
The code path using vhost_net is much longer - a ping test shows that
the zerocopy-based vhost_net reports around 0.237 ms, while vhost-pci
reports around 0.06 ms (roughly a 4x latency reduction).
Due to an environment issue, I can only report the throughput numbers later.
>>
>> That being said, we compared to vhost-user, instead of vhost_net,
>> because vhost-user is the one
>> that is used in NFV, which we think is a major use case for vhost-pci.
>
> If this is true, why not draft a pmd driver instead of a kernel one?
Yes, that's right. There are actually two directions for the vhost-pci
driver implementation - a kernel driver and a DPDK pmd. The QEMU-side
device patches are posted first for discussion, because once the device
part is ready, we will be able to have the related team work on the pmd
driver as well. As usual, the pmd driver would give much better throughput.
So, I think at this stage we should focus on the device part review, and
use the kernel driver to prove that the device part design and
implementation is reasonable and functional.
> And do you use virtio-net kernel driver to compare the performance? If
> yes, has OVS dpdk optimized for kernel driver (I think not)?
>
We used the legacy OVS+DPDK.
Another thing about the existing OVS+DPDK usage is its centralized nature.
With vhost-pci, we will be able to de-centralize the usage.
> What's more important, if vhost-pci is faster, I think its kernel
> driver should be also faster than virtio-net, no?
Sorry about the confusion. We are actually not trying to use vhost-pci
to replace virtio-net. Rather, vhost-pci
can be viewed as another type of backend for virtio-net to be used in
NFV (the communication channel is
vhost-pci-net<->virtio_net).
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 9:00 ` Wei Wang
@ 2017-05-19 9:53 ` Jason Wang
2017-05-19 20:44 ` Michael S. Tsirkin
1 sibling, 0 replies; 52+ messages in thread
From: Jason Wang @ 2017-05-19 9:53 UTC (permalink / raw)
To: Wei Wang, stefanha, marcandre.lureau, mst, pbonzini, virtio-dev,
qemu-devel
On 2017-05-19 17:00, Wei Wang wrote:
> On 05/19/2017 11:10 AM, Jason Wang wrote:
>>
>>
>> On 2017-05-18 11:03, Wei Wang wrote:
>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>
>>>>
>>>> On 2017-05-17 14:16, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 2017-05-16 15:12, Wei Wang wrote:
>>>>>>>>
>>>>>>>
>>>>>>> Hi:
>>>>>>>
>>>>>>> Care to post the driver codes too?
>>>>>>>
>>>>>> OK. It may take some time to clean up the driver code before post
>>>>>> it out. You can first
>>>>>> have a check of the draft at the repo here:
>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>
>>>>>> Best,
>>>>>> Wei
>>>>>
>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>> have zerocopy support for tun for VM2VM traffic. Could you please
>>>>> try to compare it with your vhost-pci-net by:
>>>>>
>>> We can analyze from the whole data path - from VM1's network stack
>>> to send packets -> VM2's
>>> network stack to receive packets. The number of copies are actually
>>> the same for both.
>>
>> That's why I'm asking you to compare the performance. The only reason
>> for vhost-pci is performance. You should prove it.
>>
>>>
>>> vhost-pci: 1-copy happen in VM1's driver xmit(), which copes packets
>>> from its network stack to VM2's
>>> RX ring buffer. (we call it "zerocopy" because there is no
>>> intermediate copy between VMs)
>>> zerocopy enabled vhost-net: 1-copy happen in tun's recvmsg, which
>>> copies packets from VM1's TX ring
>>> buffer to VM2's RX ring buffer.
>>
>> Actually, there's a major difference here. You do copy in guest which
>> consumes time slice of vcpu thread on host. Vhost_net do this in its
>> own thread. So I feel vhost_net is even faster here, maybe I was wrong.
>>
>
> The code path using vhost_net is much longer - the Ping test shows
> that the zcopy based vhost_net reports around 0.237ms,
> while using vhost-pci it reports around 0.06 ms.
> For some environment issue, I can report the throughput number later.
Yes, vhost-pci should have better latency by design. But we should
measure pps and packet sizes other than 64 bytes as well. I agree
vhost_net has bad latency, but this does not mean it cannot be improved
(it's just that few people have worked on improving it in the past),
especially when we know the destination is another VM.
>
>>>
>>> That being said, we compared to vhost-user, instead of vhost_net,
>>> because vhost-user is the one
>>> that is used in NFV, which we think is a major use case for vhost-pci.
>>
>> If this is true, why not draft a pmd driver instead of a kernel one?
>
> Yes, that's right. There are actually two directions of the vhost-pci
> driver implementation - kernel driver
> and dpdk pmd. The QEMU side device patches are first posted out for
> discussion, because when the device
> part is ready, we will be able to have the related team work on the
> pmd driver as well. As usual, the pmd
> driver would give a much better throughput.
I think a pmd should be easier to prototype than a kernel driver.
>
> So, I think at this stage we should focus on the device part review,
> and use the kernel driver to prove that
> the device part design and implementation is reasonable and functional.
>
Probably both.
>
>> And do you use virtio-net kernel driver to compare the performance?
>> If yes, has OVS dpdk optimized for kernel driver (I think not)?
>>
>
> We used the legacy OVS+DPDK.
> Another thing with the existing OVS+DPDK usage is its centralization
> property. With vhost-pci, we will be able to
> de-centralize the usage.
>
Right, so I think we should prove:
- For usage, prove or make vhost-pci better than the existing
shared-memory-based solutions. (Or is virtio good at shared memory?)
- For performance, prove or make vhost-pci better than the existing
centralized solution.
>> What's more important, if vhost-pci is faster, I think its kernel
>> driver should be also faster than virtio-net, no?
>
> Sorry about the confusion. We are actually not trying to use vhost-pci
> to replace virtio-net. Rather, vhost-pci
> can be viewed as another type of backend for virtio-net to be used in
> NFV (the communication channel is
> vhost-pci-net<->virtio_net).
My point is that performance numbers are important for proving the
correctness of both the design and the engineering. If it's slow, it is
less interesting for NFV.
Thanks
>
>
> Best,
> Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-19 9:00 ` Wei Wang
@ 2017-05-19 15:33 ` Stefan Hajnoczi
2017-05-22 2:27 ` Jason Wang
2017-05-19 16:49 ` Michael S. Tsirkin
2 siblings, 1 reply; 52+ messages in thread
From: Stefan Hajnoczi @ 2017-05-19 15:33 UTC (permalink / raw)
To: Jason Wang
Cc: Wei Wang, marcandre.lureau, mst, pbonzini, virtio-dev, qemu-devel
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
> On 2017-05-18 11:03, Wei Wang wrote:
> > On 05/17/2017 02:22 PM, Jason Wang wrote:
> > > On 2017-05-17 14:16, Jason Wang wrote:
> > > > On 2017-05-16 15:12, Wei Wang wrote:
> > > > > > Hi:
> > > > > >
> > > > > > Care to post the driver codes too?
> > > > > >
> > > > > OK. It may take some time to clean up the driver code before
> > > > > post it out. You can first
> > > > > have a check of the draft at the repo here:
> > > > > https://github.com/wei-w-wang/vhost-pci-driver
> > > > >
> > > > > Best,
> > > > > Wei
> > > >
> > > > Interesting, looks like there's one copy on tx side. We used to
> > > > have zerocopy support for tun for VM2VM traffic. Could you
> > > > please try to compare it with your vhost-pci-net by:
> > > >
> > We can analyze from the whole data path - from VM1's network stack to
> > send packets -> VM2's
> > network stack to receive packets. The number of copies are actually the
> > same for both.
>
> That's why I'm asking you to compare the performance. The only reason for
> vhost-pci is performance. You should prove it.
There is another reason for vhost-pci besides maximum performance:
vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds. Cloud providers do not allow end-users to
run custom vhost-user processes on the host, so you need vhost-pci.
Stefan
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-19 9:00 ` Wei Wang
2017-05-19 15:33 ` Stefan Hajnoczi
@ 2017-05-19 16:49 ` Michael S. Tsirkin
2017-05-22 2:22 ` Jason Wang
2 siblings, 1 reply; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-05-19 16:49 UTC (permalink / raw)
To: Jason Wang
Cc: Wei Wang, stefanha, marcandre.lureau, pbonzini, virtio-dev, qemu-devel
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>
>
> On 2017-05-18 11:03, Wei Wang wrote:
> > On 05/17/2017 02:22 PM, Jason Wang wrote:
> > >
> > >
> > > On 2017-05-17 14:16, Jason Wang wrote:
> > > >
> > > >
> > > > On 2017-05-16 15:12, Wei Wang wrote:
> > > > > > >
> > > > > >
> > > > > > Hi:
> > > > > >
> > > > > > Care to post the driver codes too?
> > > > > >
> > > > > OK. It may take some time to clean up the driver code before
> > > > > post it out. You can first
> > > > > have a check of the draft at the repo here:
> > > > > https://github.com/wei-w-wang/vhost-pci-driver
> > > > >
> > > > > Best,
> > > > > Wei
> > > >
> > > > Interesting, looks like there's one copy on tx side. We used to
> > > > have zerocopy support for tun for VM2VM traffic. Could you
> > > > please try to compare it with your vhost-pci-net by:
> > > >
> > We can analyze from the whole data path - from VM1's network stack to
> > send packets -> VM2's
> > network stack to receive packets. The number of copies are actually the
> > same for both.
>
> That's why I'm asking you to compare the performance. The only reason for
> vhost-pci is performance. You should prove it.
>
> >
> > vhost-pci: 1-copy happen in VM1's driver xmit(), which copes packets
> > from its network stack to VM2's
> > RX ring buffer. (we call it "zerocopy" because there is no intermediate
> > copy between VMs)
> > zerocopy enabled vhost-net: 1-copy happen in tun's recvmsg, which copies
> > packets from VM1's TX ring
> > buffer to VM2's RX ring buffer.
>
> Actually, there's a major difference here. You do copy in guest which
> consumes time slice of vcpu thread on host. Vhost_net do this in its own
> thread. So I feel vhost_net is even faster here, maybe I was wrong.
Yes but only if you have enough CPUs. The point of vhost-pci
is to put the switch in a VM and scale better with # of VMs.
> >
> > That being said, we compared to vhost-user, instead of vhost_net,
> > because vhost-user is the one
> > that is used in NFV, which we think is a major use case for vhost-pci.
>
> If this is true, why not draft a pmd driver instead of a kernel one? And do
> you use virtio-net kernel driver to compare the performance? If yes, has OVS
> dpdk optimized for kernel driver (I think not)?
>
> What's more important, if vhost-pci is faster, I think its kernel driver
> should be also faster than virtio-net, no?
If you have a vhost CPU per VCPU and can give a host CPU to each, then
using that will be faster. But not everyone has so many host CPUs.
> >
> >
> > > > - make sure zerocopy is enabled for vhost_net
> > > > - comment skb_orphan_frags() in tun_net_xmit()
> > > >
> > > > Thanks
> > > >
> > >
> > > You can even enable tx batching for tun by ethtool -C tap0 rx-frames
> > > N. This will greatly improve the performance according to my test.
> > >
> >
> > Thanks, but would this hurt latency?
> >
> > Best,
> > Wei
>
> I don't see this in my test.
>
> Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 9:00 ` Wei Wang
2017-05-19 9:53 ` Jason Wang
@ 2017-05-19 20:44 ` Michael S. Tsirkin
2017-05-23 11:09 ` Wei Wang
1 sibling, 1 reply; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-05-19 20:44 UTC (permalink / raw)
To: Wei Wang
Cc: Jason Wang, stefanha, marcandre.lureau, pbonzini, virtio-dev, qemu-devel
On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
> > >
> > > That being said, we compared to vhost-user, instead of vhost_net,
> > > because vhost-user is the one
> > > that is used in NFV, which we think is a major use case for vhost-pci.
> >
> > If this is true, why not draft a pmd driver instead of a kernel one?
>
> Yes, that's right. There are actually two directions of the vhost-pci driver
> implementation - kernel driver
> and dpdk pmd. The QEMU side device patches are first posted out for
> discussion, because when the device
> part is ready, we will be able to have the related team work on the pmd
> driver as well. As usual, the pmd
> driver would give a much better throughput.
For a PMD to work though, the protocol will need to support vIOMMU.
I'm not asking you to add it right now, since it's a work in progress
for vhost-user at this point, but it is something you will have to
keep in mind. Further, reviewing the vhost-user IOMMU patches might be
a good idea for you.
--
MST
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 16:49 ` Michael S. Tsirkin
@ 2017-05-22 2:22 ` Jason Wang
0 siblings, 0 replies; 52+ messages in thread
From: Jason Wang @ 2017-05-22 2:22 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: virtio-dev, stefanha, qemu-devel, Wei Wang, marcandre.lureau, pbonzini
On 2017-05-20 00:49, Michael S. Tsirkin wrote:
> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>
>> On 2017-05-18 11:03, Wei Wang wrote:
>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>
>>>> On 2017-05-17 14:16, Jason Wang wrote:
>>>>>
>>>>> On 2017-05-16 15:12, Wei Wang wrote:
>>>>>>> Hi:
>>>>>>>
>>>>>>> Care to post the driver codes too?
>>>>>>>
>>>>>> OK. It may take some time to clean up the driver code before
>>>>>> post it out. You can first
>>>>>> have a check of the draft at the repo here:
>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>
>>>>>> Best,
>>>>>> Wei
>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>> please try to compare it with your vhost-pci-net by:
>>>>>
>>> We can analyze from the whole data path - from VM1's network stack to
>>> send packets -> VM2's
>>> network stack to receive packets. The number of copies are actually the
>>> same for both.
>> That's why I'm asking you to compare the performance. The only reason for
>> vhost-pci is performance. You should prove it.
>>
>>> vhost-pci: 1-copy happen in VM1's driver xmit(), which copes packets
>>> from its network stack to VM2's
>>> RX ring buffer. (we call it "zerocopy" because there is no intermediate
>>> copy between VMs)
>>> zerocopy enabled vhost-net: 1-copy happen in tun's recvmsg, which copies
>>> packets from VM1's TX ring
>>> buffer to VM2's RX ring buffer.
>> Actually, there's a major difference here. You do copy in guest which
>> consumes time slice of vcpu thread on host. Vhost_net do this in its own
>> thread. So I feel vhost_net is even faster here, maybe I was wrong.
> Yes but only if you have enough CPUs. The point of vhost-pci
> is to put the switch in a VM and scale better with # of VMs.
Does the overall performance really increase? I suspect the only thing
vhost-pci gains here is probably the scheduling cost, and copying in the
guest should be slower than doing it in the host.
>
>>> That being said, we compared to vhost-user, instead of vhost_net,
>>> because vhost-user is the one
>>> that is used in NFV, which we think is a major use case for vhost-pci.
>> If this is true, why not draft a pmd driver instead of a kernel one? And do
>> you use virtio-net kernel driver to compare the performance? If yes, has OVS
>> dpdk optimized for kernel driver (I think not)?
>>
>> What's more important, if vhost-pci is faster, I think its kernel driver
>> should be also faster than virtio-net, no?
> If you have a vhost CPU per VCPU and can give a host CPU to each using
> that will be faster. But not everyone has so many host CPUs.
If the major use case is NFV, shouldn't we have sufficient CPU resources?
Thanks
>
>
>>>
>>>>> - make sure zerocopy is enabled for vhost_net
>>>>> - comment skb_orphan_frags() in tun_net_xmit()
>>>>>
>>>>> Thanks
>>>>>
>>>> You can even enable tx batching for tun by ethtool -C tap0 rx-frames
>>>> N. This will greatly improve the performance according to my test.
>>>>
>>> Thanks, but would this hurt latency?
>>>
>>> Best,
>>> Wei
>> I don't see this in my test.
>>
>> Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 15:33 ` Stefan Hajnoczi
@ 2017-05-22 2:27 ` Jason Wang
2017-05-22 11:46 ` Wang, Wei W
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-22 2:27 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: virtio-dev, mst, qemu-devel, Wei Wang, marcandre.lureau, pbonzini
On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>> On 2017年05月18日 11:03, Wei Wang wrote:
>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>> Hi:
>>>>>>>
>>>>>>> Care to post the driver codes too?
>>>>>>>
>>>>>> OK. It may take some time to clean up the driver code before
>>>>>> post it out. You can first
>>>>>> have a check of the draft at the repo here:
>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>
>>>>>> Best,
>>>>>> Wei
>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>> please try to compare it with your vhost-pci-net by:
>>>>>
>>> We can analyze from the whole data path - from VM1's network stack to
>>> send packets -> VM2's
>>> network stack to receive packets. The number of copies are actually the
>>> same for both.
>> That's why I'm asking you to compare the performance. The only reason for
>> vhost-pci is performance. You should prove it.
> There is another reason for vhost-pci besides maximum performance:
>
> vhost-pci makes it possible for end-users to run networking or storage
> appliances in compute clouds. Cloud providers do not allow end-users to
> run custom vhost-user processes on the host so you need vhost-pci.
>
> Stefan
Then it has non-NFV use cases, and the question goes back to the
performance comparison between vhost-pci and zerocopy vhost_net. If it
does not perform better, it is less interesting, at least in this case.
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-22 2:27 ` Jason Wang
@ 2017-05-22 11:46 ` Wang, Wei W
2017-05-23 2:08 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wang, Wei W @ 2017-05-22 11:46 UTC (permalink / raw)
To: Jason Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, qemu-devel, marcandre.lureau, pbonzini
On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
> > On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
> >> On 2017年05月18日 11:03, Wei Wang wrote:
> >>> On 05/17/2017 02:22 PM, Jason Wang wrote:
> >>>> On 2017年05月17日 14:16, Jason Wang wrote:
> >>>>> On 2017年05月16日 15:12, Wei Wang wrote:
> >>>>>>> Hi:
> >>>>>>>
> >>>>>>> Care to post the driver codes too?
> >>>>>>>
> >>>>>> OK. It may take some time to clean up the driver code before post
> >>>>>> it out. You can first have a check of the draft at the repo here:
> >>>>>> https://github.com/wei-w-wang/vhost-pci-driver
> >>>>>>
> >>>>>> Best,
> >>>>>> Wei
> >>>>> Interesting, looks like there's one copy on tx side. We used to
> >>>>> have zerocopy support for tun for VM2VM traffic. Could you please
> >>>>> try to compare it with your vhost-pci-net by:
> >>>>>
> >>> We can analyze from the whole data path - from VM1's network stack
> >>> to send packets -> VM2's network stack to receive packets. The
> >>> number of copies are actually the same for both.
> >> That's why I'm asking you to compare the performance. The only reason
> >> for vhost-pci is performance. You should prove it.
> > There is another reason for vhost-pci besides maximum performance:
> >
> > vhost-pci makes it possible for end-users to run networking or storage
> > appliances in compute clouds. Cloud providers do not allow end-users
> > to run custom vhost-user processes on the host so you need vhost-pci.
> >
> > Stefan
>
> Then it has non NFV use cases and the question goes back to the performance
> comparing between vhost-pci and zerocopy vhost_net. If it does not perform
> better, it was less interesting at least in this case.
>
I can probably share what we got about vhost-pci and vhost-user:
https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
Right now, I don’t have the environment to add the vhost_net test.
Btw, do you have data on vhost_net vs. vhost-user?
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-22 11:46 ` Wang, Wei W
@ 2017-05-23 2:08 ` Jason Wang
2017-05-23 5:47 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-23 2:08 UTC (permalink / raw)
To: Wang, Wei W, Stefan Hajnoczi
Cc: virtio-dev, pbonzini, marcandre.lureau, qemu-devel, mst
On 2017年05月22日 19:46, Wang, Wei W wrote:
> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>> Hi:
>>>>>>>>>
>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>
>>>>>>>> OK. It may take some time to clean up the driver code before post
>>>>>>>> it out. You can first have a check of the draft at the repo here:
>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>
>>>>>>>> Best,
>>>>>>>> Wei
>>>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you please
>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>
>>>>> We can analyze from the whole data path - from VM1's network stack
>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>> number of copies are actually the same for both.
>>>> That's why I'm asking you to compare the performance. The only reason
>>>> for vhost-pci is performance. You should prove it.
>>> There is another reason for vhost-pci besides maximum performance:
>>>
>>> vhost-pci makes it possible for end-users to run networking or storage
>>> appliances in compute clouds. Cloud providers do not allow end-users
>>> to run custom vhost-user processes on the host so you need vhost-pci.
>>>
>>> Stefan
>> Then it has non NFV use cases and the question goes back to the performance
>> comparing between vhost-pci and zerocopy vhost_net. If it does not perform
>> better, it was less interesting at least in this case.
>>
> Probably I can share what we got about vhost-pci and vhost-user:
> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
> Right now, I don’t have the environment to add the vhost_net test.
Thanks, the numbers look good. But I have some questions:
- Are the numbers measured with your vhost-pci kernel driver code?
- Have you tested packet sizes other than 64B?
- Is zerocopy supported in OVS-dpdk? If yes, is it enabled in your test?
>
> Btw, do you have data about vhost_net v.s. vhost_user?
I haven't.
Thanks
>
> Best,
> Wei
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-23 2:08 ` Jason Wang
@ 2017-05-23 5:47 ` Wei Wang
2017-05-23 6:32 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-23 5:47 UTC (permalink / raw)
To: Jason Wang, Stefan Hajnoczi
Cc: virtio-dev, pbonzini, marcandre.lureau, qemu-devel, mst
On 05/23/2017 10:08 AM, Jason Wang wrote:
>
>
> On 2017年05月22日 19:46, Wang, Wei W wrote:
>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>> Hi:
>>>>>>>>>>
>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>
>>>>>>>>> OK. It may take some time to clean up the driver code before post
>>>>>>>>> it out. You can first have a check of the draft at the repo here:
>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>
>>>>>>>>> Best,
>>>>>>>>> Wei
>>>>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you please
>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>
>>>>>> We can analyze from the whole data path - from VM1's network stack
>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>> number of copies are actually the same for both.
>>>>> That's why I'm asking you to compare the performance. The only reason
>>>>> for vhost-pci is performance. You should prove it.
>>>> There is another reason for vhost-pci besides maximum performance:
>>>>
>>>> vhost-pci makes it possible for end-users to run networking or storage
>>>> appliances in compute clouds. Cloud providers do not allow end-users
>>>> to run custom vhost-user processes on the host so you need vhost-pci.
>>>>
>>>> Stefan
>>> Then it has non NFV use cases and the question goes back to the
>>> performance
>>> comparing between vhost-pci and zerocopy vhost_net. If it does not
>>> perform
>>> better, it was less interesting at least in this case.
>>>
>> Probably I can share what we got about vhost-pci and vhost-user:
>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>
>> Right now, I don’t have the environment to add the vhost_net test.
>
> Thanks, the number looks good. But I have some questions:
>
> - Is the number measured through your vhost-pci kernel driver code?
Yes, the kernel driver code.
> - Have you tested packet size other than 64B?
Not yet.
> - Is zerocopy supported in OVS-dpdk? If yes, is it enabled in your test?
zerocopy is not used in the test, but I don't think zerocopy can increase
the throughput by 2x. On the other hand, we haven't put effort into
optimizing the draft kernel driver yet.
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-23 5:47 ` Wei Wang
@ 2017-05-23 6:32 ` Jason Wang
2017-05-23 10:48 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-23 6:32 UTC (permalink / raw)
To: Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 2017年05月23日 13:47, Wei Wang wrote:
> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>
>>
>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>> Hi:
>>>>>>>>>>>
>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>
>>>>>>>>>> OK. It may take some time to clean up the driver code before
>>>>>>>>>> post
>>>>>>>>>> it out. You can first have a check of the draft at the repo
>>>>>>>>>> here:
>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>> Wei
>>>>>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you please
>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>
>>>>>>> We can analyze from the whole data path - from VM1's network stack
>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>> number of copies are actually the same for both.
>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>> reason
>>>>>> for vhost-pci is performance. You should prove it.
>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>
>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>> storage
>>>>> appliances in compute clouds. Cloud providers do not allow end-users
>>>>> to run custom vhost-user processes on the host so you need vhost-pci.
>>>>>
>>>>> Stefan
>>>> Then it has non NFV use cases and the question goes back to the
>>>> performance
>>>> comparing between vhost-pci and zerocopy vhost_net. If it does not
>>>> perform
>>>> better, it was less interesting at least in this case.
>>>>
>>> Probably I can share what we got about vhost-pci and vhost-user:
>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>
>>> Right now, I don’t have the environment to add the vhost_net test.
>>
>> Thanks, the number looks good. But I have some questions:
>>
>> - Is the number measured through your vhost-pci kernel driver code?
>
> Yes, the kernel driver code.
Interesting: in the above link, "l2fwd" was used in the vhost-pci testing.
I want to know more about the test configuration. If l2fwd is the one that
dpdk has, I want to know how you made it work with a kernel driver
(maybe a packet socket, I think?). If not, I want to know how you
configured it (e.g. through a bridge, act_mirred, or something else). And
in the OVS dpdk case, is dpdk l2fwd + pmd used in the testing?
>
>> - Have you tested packet size other than 64B?
>
> Not yet.
Better to test more sizes, since the time spent on a 64B copy should be very short.
>
>> - Is zerocopy supported in OVS-dpdk? If yes, is it enabled in your test?
> zerocopy is not used in the test, but I don't think zerocopy can increase
> the throughput to 2x.
I agree, but we need to prove this with numbers.
Thanks
> On the other side, we haven't put effort to optimize
> the draft kernel driver yet.
>
> Best,
> Wei
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-23 6:32 ` Jason Wang
@ 2017-05-23 10:48 ` Wei Wang
2017-05-24 3:24 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-23 10:48 UTC (permalink / raw)
To: Jason Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 05/23/2017 02:32 PM, Jason Wang wrote:
>
>
> On 2017年05月23日 13:47, Wei Wang wrote:
>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>
>>>
>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>> Hi:
>>>>>>>>>>>>
>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>
>>>>>>>>>>> OK. It may take some time to clean up the driver code before
>>>>>>>>>>> post
>>>>>>>>>>> it out. You can first have a check of the draft at the repo
>>>>>>>>>>> here:
>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>
>>>>>>>>>>> Best,
>>>>>>>>>>> Wei
>>>>>>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>>>>>>> please
>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>
>>>>>>>> We can analyze from the whole data path - from VM1's network stack
>>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>>> number of copies are actually the same for both.
>>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>>> reason
>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>>
>>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>>> storage
>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>> end-users
>>>>>> to run custom vhost-user processes on the host so you need
>>>>>> vhost-pci.
>>>>>>
>>>>>> Stefan
>>>>> Then it has non NFV use cases and the question goes back to the
>>>>> performance
>>>>> comparing between vhost-pci and zerocopy vhost_net. If it does not
>>>>> perform
>>>>> better, it was less interesting at least in this case.
>>>>>
>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>
>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>
>>> Thanks, the number looks good. But I have some questions:
>>>
>>> - Is the number measured through your vhost-pci kernel driver code?
>>
>> Yes, the kernel driver code.
>
> Interesting, in the above link, "l2fwd" was used in vhost-pci testing.
> I want to know more about the test configuration: If l2fwd is the one
> that dpdk had, want to know how can you make it work for kernel
> driver. (Maybe packet socket I think?) If not, want to know how do you
> configure it (e.g through bridge or act_mirred or others). And in OVS
> dpdk, is dpdk l2fwd + pmd used in the testing?
>
Oh, that l2fwd is a kernel module from OPNFV vsperf
(http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html).
Both the legacy and vhost-pci cases use the same l2fwd module.
No bridge is used; the module already works at L2, forwarding packets
between two net devices.
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-19 20:44 ` Michael S. Tsirkin
@ 2017-05-23 11:09 ` Wei Wang
2017-05-23 15:15 ` Michael S. Tsirkin
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-23 11:09 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, stefanha, marcandre.lureau, pbonzini, virtio-dev, qemu-devel
On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
>>>> That being said, we compared to vhost-user, instead of vhost_net,
>>>> because vhost-user is the one
>>>> that is used in NFV, which we think is a major use case for vhost-pci.
>>> If this is true, why not draft a pmd driver instead of a kernel one?
>> Yes, that's right. There are actually two directions of the vhost-pci driver
>> implementation - kernel driver
>> and dpdk pmd. The QEMU side device patches are first posted out for
>> discussion, because when the device
>> part is ready, we will be able to have the related team work on the pmd
>> driver as well. As usual, the pmd
>> driver would give a much better throughput.
> For PMD to work though, the protocol will need to support vIOMMU.
> Not asking you to add it right now since it's work in progress
> for vhost user at this point, but something you will have to
> keep in mind. Further, reviewing vhost user iommu patches might be
> a good idea for you.
>
For the dpdk pmd case, I'm not sure vIOMMU is necessary -
since it only needs to share a piece of memory between the two VMs, we
could send just that piece of memory info for sharing, instead of sending
the entire VM's memory and using vIOMMU to expose the accessible portion.
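A rough sketch of how small the shared info could be (a hypothetical
message layout, not the format used in the posted patches):

    #include <stdint.h>

    /* Hypothetical descriptor for one shared region; the field names
     * are illustrative, not from the posted patches. */
    struct vpnet_mem_region {
            uint64_t guest_phys_addr; /* start GPA of the region in the peer VM */
            uint64_t size;            /* length of the shared region */
            uint64_t mmap_offset;     /* offset into the fd sent over the socket */
    };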
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-23 11:09 ` Wei Wang
@ 2017-05-23 15:15 ` Michael S. Tsirkin
0 siblings, 0 replies; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-05-23 15:15 UTC (permalink / raw)
To: Wei Wang
Cc: Jason Wang, stefanha, marcandre.lureau, pbonzini, virtio-dev, qemu-devel
On Tue, May 23, 2017 at 07:09:05PM +0800, Wei Wang wrote:
> On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> > On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
> > > > > That being said, we compared to vhost-user, instead of vhost_net,
> > > > > because vhost-user is the one
> > > > > that is used in NFV, which we think is a major use case for vhost-pci.
> > > > If this is true, why not draft a pmd driver instead of a kernel one?
> > > Yes, that's right. There are actually two directions of the vhost-pci driver
> > > implementation - kernel driver
> > > and dpdk pmd. The QEMU side device patches are first posted out for
> > > discussion, because when the device
> > > part is ready, we will be able to have the related team work on the pmd
> > > driver as well. As usual, the pmd
> > > driver would give a much better throughput.
> > For PMD to work though, the protocol will need to support vIOMMU.
> > Not asking you to add it right now since it's work in progress
> > for vhost user at this point, but something you will have to
> > keep in mind. Further, reviewing vhost user iommu patches might be
> > a good idea for you.
> >
>
> For the dpdk pmd case, I'm not sure if vIOMMU is necessary to be used -
> Since it only needs to share a piece of memory between the two VMs, we
> can only send that piece of memory info for sharing, instead of sending the
> entire VM's memory and using vIOMMU to expose that accessible portion.
>
> Best,
> Wei
I am not sure I understand what you are saying here. My understanding is
that at the moment, with VM1 using virtio and VM2 using vhost-pci, all of VM1's
memory is exposed to VM2. If VM1 is using a userspace driver, it needs a
way for the kernel to limit the memory regions which are accessible to
the device. At the moment this is done by VFIO by means of interacting
with a vIOMMU.
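For reference, this is roughly the VFIO path a userspace driver goes
through today (a sketch; error handling omitted):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Sketch: the userspace driver tells the kernel which memory the
     * device may access; with a vIOMMU this bounds the window instead
     * of exposing all of the VM's memory. */
    static int map_dma(int container_fd, void *buf, uint64_t iova,
                       uint64_t size)
    {
            struct vfio_iommu_type1_dma_map map = {
                    .argsz = sizeof(map),
                    .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                    .vaddr = (uintptr_t)buf, /* process virtual address */
                    .iova  = iova,           /* address the device will use */
                    .size  = size,
            };
            return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }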
--
MST
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-23 10:48 ` Wei Wang
@ 2017-05-24 3:24 ` Jason Wang
2017-05-24 8:31 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-24 3:24 UTC (permalink / raw)
To: Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 2017年05月23日 18:48, Wei Wang wrote:
> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>
>>
>> On 2017年05月23日 13:47, Wei Wang wrote:
>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>
>>>>
>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>
>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>> before post
>>>>>>>>>>>> it out. You can first have a check of the draft at the repo
>>>>>>>>>>>> here:
>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>
>>>>>>>>>>>> Best,
>>>>>>>>>>>> Wei
>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We used to
>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>>>>>>>> please
>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>
>>>>>>>>> We can analyze from the whole data path - from VM1's network
>>>>>>>>> stack
>>>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>>>> number of copies are actually the same for both.
>>>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>>>> reason
>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>>>
>>>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>>>> storage
>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>> end-users
>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>> vhost-pci.
>>>>>>>
>>>>>>> Stefan
>>>>>> Then it has non NFV use cases and the question goes back to the
>>>>>> performance
>>>>>> comparing between vhost-pci and zerocopy vhost_net. If it does
>>>>>> not perform
>>>>>> better, it was less interesting at least in this case.
>>>>>>
>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>
>>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>>
>>>> Thanks, the number looks good. But I have some questions:
>>>>
>>>> - Is the number measured through your vhost-pci kernel driver code?
>>>
>>> Yes, the kernel driver code.
>>
>> Interesting, in the above link, "l2fwd" was used in vhost-pci
>> testing. I want to know more about the test configuration: If l2fwd
>> is the one that dpdk had, want to know how can you make it work for
>> kernel driver. (Maybe packet socket I think?) If not, want to know
>> how do you configure it (e.g through bridge or act_mirred or others).
>> And in OVS dpdk, is dpdk l2fwd + pmd used in the testing?
>>
>
> Oh, that l2fwd is a kernel module from OPNFV vsperf
> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
> For both legacy and vhost-pci cases, they use the same l2fwd module.
> No bridge is used, the module already works at L2 to forward packets
> between two net devices.
Thanks for the pointer. Just to confirm: the virtio-net kernel
driver is used in the OVS-dpdk test?
Another question: can we manage to remove the copy in tx? If not, is
it a limitation of the virtio protocol?
Thanks
>
> Best,
> Wei
>
>
>
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-24 3:24 ` Jason Wang
@ 2017-05-24 8:31 ` Wei Wang
2017-05-25 7:59 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-24 8:31 UTC (permalink / raw)
To: Jason Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 05/24/2017 11:24 AM, Jason Wang wrote:
>
>
> On 2017年05月23日 18:48, Wei Wang wrote:
>> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>>
>>>
>>> On 2017年05月23日 13:47, Wei Wang wrote:
>>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>>> before post
>>>>>>>>>>>>> it out. You can first have a check of the draft at the
>>>>>>>>>>>>> repo here:
>>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best,
>>>>>>>>>>>>> Wei
>>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We
>>>>>>>>>>>> used to
>>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>>>>>>>>> please
>>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>>
>>>>>>>>>> We can analyze from the whole data path - from VM1's network
>>>>>>>>>> stack
>>>>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>>>>> number of copies are actually the same for both.
>>>>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>>>>> reason
>>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>>>>
>>>>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>>>>> storage
>>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>>> end-users
>>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>>> vhost-pci.
>>>>>>>>
>>>>>>>> Stefan
>>>>>>> Then it has non NFV use cases and the question goes back to the
>>>>>>> performance
>>>>>>> comparing between vhost-pci and zerocopy vhost_net. If it does
>>>>>>> not perform
>>>>>>> better, it was less interesting at least in this case.
>>>>>>>
>>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>>
>>>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>>>
>>>>> Thanks, the number looks good. But I have some questions:
>>>>>
>>>>> - Is the number measured through your vhost-pci kernel driver code?
>>>>
>>>> Yes, the kernel driver code.
>>>
>>> Interesting, in the above link, "l2fwd" was used in vhost-pci
>>> testing. I want to know more about the test configuration: If l2fwd
>>> is the one that dpdk had, want to know how can you make it work for
>>> kernel driver. (Maybe packet socket I think?) If not, want to know
>>> how do you configure it (e.g through bridge or act_mirred or
>>> others). And in OVS dpdk, is dpdk l2fwd + pmd used in the testing?
>>>
>>
>> Oh, that l2fwd is a kernel module from OPNFV vsperf
>> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
>> For both legacy and vhost-pci cases, they use the same l2fwd module.
>> No bridge is used, the module already works at L2 to forward packets
>> between two net devices.
>
> Thanks for the pointer. Just to confirm, I think virtio-net kernel
> driver is used in OVS-dpdk test?
Yes. In both cases, the guests are using kernel drivers.
>
> Another question is, can we manage to remove the copy in tx? If not,
> is it a limitation of virtio protocol?
>
No, we can't. Using this example: with VM1's vhost-pci <-> VM2's
virtio-net, VM1 sees VM2's memory, but VM2 only sees its own memory.
What this copy achieves is to get data from VM1's memory into VM2's
memory, so that VM2 can deliver its own memory to its network stack.
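A minimal sketch of where that one copy sits in the xmit path; the
vpnet_* helper names are hypothetical, not the actual driver code:

    /* Sketch: VM2's RX buffers are visible to VM1 through the vhost-pci
     * bar mapping, so xmit copies the skb straight into a buffer owned
     * by VM2 and then notifies the peer. */
    static netdev_tx_t vpnet_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct vpnet_priv *priv = netdev_priv(dev);
            void *peer_buf = vpnet_get_peer_rx_buf(priv, skb->len); /* hypothetical */

            if (!peer_buf)
                    return NETDEV_TX_BUSY;
            skb_copy_bits(skb, 0, peer_buf, skb->len); /* the one copy */
            vpnet_publish_and_kick(priv, skb->len);    /* hypothetical */
            dev_consume_skb_any(skb);
            return NETDEV_TX_OK;
    }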
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-24 8:31 ` Wei Wang
@ 2017-05-25 7:59 ` Jason Wang
2017-05-25 12:01 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-25 7:59 UTC (permalink / raw)
To: Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, pbonzini, marcandre.lureau, qemu-devel, mst
On 2017年05月24日 16:31, Wei Wang wrote:
> On 05/24/2017 11:24 AM, Jason Wang wrote:
>>
>>
>> On 2017年05月23日 18:48, Wei Wang wrote:
>>> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>>>
>>>>
>>>> On 2017年05月23日 13:47, Wei Wang wrote:
>>>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>>>
>>>>>>
>>>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>>>> before post
>>>>>>>>>>>>>> it out. You can first have a check of the draft at the
>>>>>>>>>>>>>> repo here:
>>>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We
>>>>>>>>>>>>> used to
>>>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could you
>>>>>>>>>>>>> please
>>>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>>>
>>>>>>>>>>> We can analyze from the whole data path - from VM1's network
>>>>>>>>>>> stack
>>>>>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>>>>>> number of copies are actually the same for both.
>>>>>>>>>> That's why I'm asking you to compare the performance. The
>>>>>>>>>> only reason
>>>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>>>> There is another reason for vhost-pci besides maximum
>>>>>>>>> performance:
>>>>>>>>>
>>>>>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>>>>>> storage
>>>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>>>> end-users
>>>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>>>> vhost-pci.
>>>>>>>>>
>>>>>>>>> Stefan
>>>>>>>> Then it has non NFV use cases and the question goes back to the
>>>>>>>> performance
>>>>>>>> comparing between vhost-pci and zerocopy vhost_net. If it does
>>>>>>>> not perform
>>>>>>>> better, it was less interesting at least in this case.
>>>>>>>>
>>>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>>>
>>>>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>>>>
>>>>>> Thanks, the number looks good. But I have some questions:
>>>>>>
>>>>>> - Is the number measured through your vhost-pci kernel driver code?
>>>>>
>>>>> Yes, the kernel driver code.
>>>>
>>>> Interesting, in the above link, "l2fwd" was used in vhost-pci
>>>> testing. I want to know more about the test configuration: If l2fwd
>>>> is the one that dpdk had, want to know how can you make it work for
>>>> kernel driver. (Maybe packet socket I think?) If not, want to know
>>>> how do you configure it (e.g through bridge or act_mirred or
>>>> others). And in OVS dpdk, is dpdk l2fwd + pmd used in the testing?
>>>>
>>>
>>> Oh, that l2fwd is a kernel module from OPNFV vsperf
>>> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
>>> For both legacy and vhost-pci cases, they use the same l2fwd module.
>>> No bridge is used, the module already works at L2 to forward packets
>>> between two net devices.
>>
>> Thanks for the pointer. Just to confirm, I think virtio-net kernel
>> driver is used in OVS-dpdk test?
>
> Yes. In both cases, the guests are using kernel drivers.
>
>>
>> Another question is, can we manage to remove the copy in tx? If not,
>> is it a limitation of virtio protocol?
>>
>
> No, we can't. Use this example, VM1's Vhost-pci<->virtio-net of VM2,
> VM1 sees VM2's memory, but
> VM2 only sees its own memory.
> What this copy achieves is to get data from VM1's memory to VM2's
> memory, so that VM2 can deliver it's
> own memory to its network stack.
Then, as has been pointed out, should we consider a vhost-pci to
vhost-pci peer?
Even with the vhost-pci to virtio-net configuration, I think rx zerocopy
could be achieved, but it is not implemented in your driver (probably
easier in a pmd).
Thanks
>
> Best,
> Wei
>
>
>
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 7:59 ` Jason Wang
@ 2017-05-25 12:01 ` Wei Wang
2017-05-25 12:22 ` Jason Wang
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-05-25 12:01 UTC (permalink / raw)
To: Jason Wang, Stefan Hajnoczi
Cc: virtio-dev, pbonzini, marcandre.lureau, qemu-devel, mst
On 05/25/2017 03:59 PM, Jason Wang wrote:
>
>
> On 2017年05月24日 16:31, Wei Wang wrote:
>> On 05/24/2017 11:24 AM, Jason Wang wrote:
>>>
>>>
>>> On 2017年05月23日 18:48, Wei Wang wrote:
>>>> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 2017年05月23日 13:47, Wei Wang wrote:
>>>>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>>>>> before post
>>>>>>>>>>>>>>> it out. You can first have a check of the draft at the
>>>>>>>>>>>>>>> repo here:
>>>>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We
>>>>>>>>>>>>>> used to
>>>>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could
>>>>>>>>>>>>>> you please
>>>>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>>>>
>>>>>>>>>>>> We can analyze from the whole data path - from VM1's
>>>>>>>>>>>> network stack
>>>>>>>>>>>> to send packets -> VM2's network stack to receive packets. The
>>>>>>>>>>>> number of copies are actually the same for both.
>>>>>>>>>>> That's why I'm asking you to compare the performance. The
>>>>>>>>>>> only reason
>>>>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>>>>> There is another reason for vhost-pci besides maximum
>>>>>>>>>> performance:
>>>>>>>>>>
>>>>>>>>>> vhost-pci makes it possible for end-users to run networking
>>>>>>>>>> or storage
>>>>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>>>>> end-users
>>>>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>>>>> vhost-pci.
>>>>>>>>>>
>>>>>>>>>> Stefan
>>>>>>>>> Then it has non NFV use cases and the question goes back to
>>>>>>>>> the performance
>>>>>>>>> comparing between vhost-pci and zerocopy vhost_net. If it does
>>>>>>>>> not perform
>>>>>>>>> better, it was less interesting at least in this case.
>>>>>>>>>
>>>>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>>>>
>>>>>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>>>>>
>>>>>>> Thanks, the number looks good. But I have some questions:
>>>>>>>
>>>>>>> - Is the number measured through your vhost-pci kernel driver code?
>>>>>>
>>>>>> Yes, the kernel driver code.
>>>>>
>>>>> Interesting, in the above link, "l2fwd" was used in vhost-pci
>>>>> testing. I want to know more about the test configuration: If
>>>>> l2fwd is the one that dpdk had, want to know how can you make it
>>>>> work for kernel driver. (Maybe packet socket I think?) If not,
>>>>> want to know how do you configure it (e.g through bridge or
>>>>> act_mirred or others). And in OVS dpdk, is dpdk l2fwd + pmd used
>>>>> in the testing?
>>>>>
>>>>
>>>> Oh, that l2fwd is a kernel module from OPNFV vsperf
>>>> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
>>>>
>>>> For both legacy and vhost-pci cases, they use the same l2fwd module.
>>>> No bridge is used, the module already works at L2 to forward packets
>>>> between two net devices.
>>>
>>> Thanks for the pointer. Just to confirm, I think virtio-net kernel
>>> driver is used in OVS-dpdk test?
>>
>> Yes. In both cases, the guests are using kernel drivers.
>>
>>>
>>> Another question is, can we manage to remove the copy in tx? If not,
>>> is it a limitation of virtio protocol?
>>>
>>
>> No, we can't. Use this example, VM1's Vhost-pci<->virtio-net of VM2,
>> VM1 sees VM2's memory, but
>> VM2 only sees its own memory.
>> What this copy achieves is to get data from VM1's memory to VM2's
>> memory, so that VM2 can deliver it's
>> own memory to its network stack.
>
> Then, as has been pointed out. Should we consider a vhost-pci to
> vhost-pci peer?
I think that's another direction or future extension.
We already have the vhost-pci to virtio-net model on the way, so I think
it would be better to start from here.
>
> Even with vhost-pci to virito-net configuration, I think rx zerocopy
> could be achieved but not implemented in your driver (probably more
> easier in pmd).
>
Yes, it would be easier with a dpdk pmd. But I think it would not be
important in the NFV use case,
since the data flow often goes in one direction.
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 12:01 ` Wei Wang
@ 2017-05-25 12:22 ` Jason Wang
2017-05-25 12:31 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-25 14:35 ` [Qemu-devel] " Eric Blake
0 siblings, 2 replies; 52+ messages in thread
From: Jason Wang @ 2017-05-25 12:22 UTC (permalink / raw)
To: Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 2017年05月25日 20:01, Wei Wang wrote:
> On 05/25/2017 03:59 PM, Jason Wang wrote:
>>
>>
>> On 2017年05月24日 16:31, Wei Wang wrote:
>>> On 05/24/2017 11:24 AM, Jason Wang wrote:
>>>>
>>>>
>>>> On 2017年05月23日 18:48, Wei Wang wrote:
>>>>> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>>>>>
>>>>>>
>>>>>> On 2017年05月23日 13:47, Wei Wang wrote:
>>>>>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>>>>>> before post
>>>>>>>>>>>>>>>> it out. You can first have a check of the draft at the
>>>>>>>>>>>>>>>> repo here:
>>>>>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We
>>>>>>>>>>>>>>> used to
>>>>>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could
>>>>>>>>>>>>>>> you please
>>>>>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>> We can analyze from the whole data path - from VM1's
>>>>>>>>>>>>> network stack
>>>>>>>>>>>>> to send packets -> VM2's network stack to receive packets.
>>>>>>>>>>>>> The
>>>>>>>>>>>>> number of copies are actually the same for both.
>>>>>>>>>>>> That's why I'm asking you to compare the performance. The
>>>>>>>>>>>> only reason
>>>>>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>>>>>> There is another reason for vhost-pci besides maximum
>>>>>>>>>>> performance:
>>>>>>>>>>>
>>>>>>>>>>> vhost-pci makes it possible for end-users to run networking
>>>>>>>>>>> or storage
>>>>>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>>>>>> end-users
>>>>>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>>>>>> vhost-pci.
>>>>>>>>>>>
>>>>>>>>>>> Stefan
>>>>>>>>>> Then it has non NFV use cases and the question goes back to
>>>>>>>>>> the performance
>>>>>>>>>> comparing between vhost-pci and zerocopy vhost_net. If it
>>>>>>>>>> does not perform
>>>>>>>>>> better, it was less interesting at least in this case.
>>>>>>>>>>
>>>>>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>>>>>
>>>>>>>>> Right now, I don’t have the environment to add the vhost_net
>>>>>>>>> test.
>>>>>>>>
>>>>>>>> Thanks, the number looks good. But I have some questions:
>>>>>>>>
>>>>>>>> - Is the number measured through your vhost-pci kernel driver
>>>>>>>> code?
>>>>>>>
>>>>>>> Yes, the kernel driver code.
>>>>>>
>>>>>> Interesting, in the above link, "l2fwd" was used in vhost-pci
>>>>>> testing. I want to know more about the test configuration: If
>>>>>> l2fwd is the one that dpdk had, want to know how can you make it
>>>>>> work for kernel driver. (Maybe packet socket I think?) If not,
>>>>>> want to know how do you configure it (e.g through bridge or
>>>>>> act_mirred or others). And in OVS dpdk, is dpdk l2fwd + pmd used
>>>>>> in the testing?
>>>>>>
>>>>>
>>>>> Oh, that l2fwd is a kernel module from OPNFV vsperf
>>>>> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
>>>>>
>>>>> For both legacy and vhost-pci cases, they use the same l2fwd module.
>>>>> No bridge is used, the module already works at L2 to forward packets
>>>>> between two net devices.
>>>>
>>>> Thanks for the pointer. Just to confirm, I think virtio-net kernel
>>>> driver is used in OVS-dpdk test?
>>>
>>> Yes. In both cases, the guests are using kernel drivers.
>>>
>>>>
>>>> Another question is, can we manage to remove the copy in tx? If
>>>> not, is it a limitation of virtio protocol?
>>>>
>>>
>>> No, we can't. Use this example, VM1's Vhost-pci<->virtio-net of VM2,
>>> VM1 sees VM2's memory, but
>>> VM2 only sees its own memory.
>>> What this copy achieves is to get data from VM1's memory to VM2's
>>> memory, so that VM2 can deliver it's
>>> own memory to its network stack.
>>
>> Then, as has been pointed out. Should we consider a vhost-pci to
>> vhost-pci peer?
> I think that's another direction or future extension.
> We already have the vhost-pci to virtio-net model on the way, so I
> think it would be better to start from here.
>
If vhost-pci to vhost-pci is obviously superior, why not try it,
considering we're at a rather early stage for vhost-pci?
>
>>
>> Even with vhost-pci to virito-net configuration, I think rx zerocopy
>> could be achieved but not implemented in your driver (probably more
>> easier in pmd).
>>
> Yes, it would be easier with dpdk pmd. But I think it would not be
> important in the NFV use case,
> since the data flow goes to one direction often.
>
> Best,
> Wei
>
I would say let's not give up on any possible performance optimization
now; you can do it in the future.
If you still want to keep the copy in both tx and rx, you'd better:
- measure the performance of packet sizes larger than 64B
- consider whether or not it's a good idea to do the copy in the vcpu
thread, or move it to another thread or threads (a rough sketch follows)
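For the second point, a rough guest-side sketch of moving the copy off
the xmit path into a worker (illustrative only; it still burns a guest
CPU, just no longer the transmitting vcpu):

    /* Sketch: queue the skb in xmit and let a workqueue do the copy, so
     * the transmitting vcpu returns quickly. vpnet_copy_one() is
     * hypothetical and would do the copy discussed earlier in the thread. */
    static void vpnet_tx_work(struct work_struct *work)
    {
            struct vpnet_priv *priv =
                    container_of(work, struct vpnet_priv, tx_work);
            struct sk_buff *skb;

            while ((skb = skb_dequeue(&priv->tx_queue)) != NULL) {
                    vpnet_copy_one(priv, skb); /* hypothetical */
                    dev_consume_skb_any(skb);
            }
    }
    /* in xmit: skb_queue_tail(&priv->tx_queue, skb);
     *          schedule_work(&priv->tx_work); */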
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 12:22 ` Jason Wang
@ 2017-05-25 12:31 ` Jason Wang
2017-05-25 17:57 ` Michael S. Tsirkin
2017-05-25 14:35 ` [Qemu-devel] " Eric Blake
1 sibling, 1 reply; 52+ messages in thread
From: Jason Wang @ 2017-05-25 12:31 UTC (permalink / raw)
To: Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 2017年05月25日 20:22, Jason Wang wrote:
>>>
>>> Even with vhost-pci to virito-net configuration, I think rx zerocopy
>>> could be achieved but not implemented in your driver (probably more
>>> easier in pmd).
>>>
>> Yes, it would be easier with dpdk pmd. But I think it would not be
>> important in the NFV use case,
>> since the data flow goes to one direction often.
>>
>> Best,
>> Wei
>>
>
> I would say let's don't give up on any possible performance
> optimization now. You can do it in the future.
>
> If you still want to keep the copy in both tx and rx, you'd better:
>
> - measure the performance of larger packet size other than 64B
> - consider whether or not it's a good idea to do it in vcpu thread, or
> move it to another one(s)
>
> Thanks
And what's more important, since you care seriously about NFV, I would
really suggest you draft a pmd for vhost-pci and use it for benchmarking.
That's the real-life case, and OVS dpdk is known to not be optimized for
kernel drivers.
Good performance numbers can help us examine the correctness of both the
design and the implementation.
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 12:22 ` Jason Wang
2017-05-25 12:31 ` [Qemu-devel] [virtio-dev] " Jason Wang
@ 2017-05-25 14:35 ` Eric Blake
2017-05-26 4:26 ` Jason Wang
1 sibling, 1 reply; 52+ messages in thread
From: Eric Blake @ 2017-05-25 14:35 UTC (permalink / raw)
To: Jason Wang, Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, pbonzini, marcandre.lureau, qemu-devel, mst
[meta-comment]
On 05/25/2017 07:22 AM, Jason Wang wrote:
>
[snip]
>>>>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>>>>> Hi:
17 levels of '>' when I add my reply. Wow.
>> I think that's another direction or future extension.
>> We already have the vhost-pci to virtio-net model on the way, so I
>> think it would be better to start from here.
>>
>
> If vhost-pci to vhost-pci is obvious superior, why not try this consider
> we're at rather early stage for vhost-pci?
>
I have to scroll a couple of screens past heavily-quoted material before
getting to the start of the additions to the thread. It's not only
okay, but recommended, to trim your replies down to relevant context so
that it is easier to get to your additions (3 or 4 levels of quoted
material can still be relevant, but 17 levels is usually a sign that you
are including too much). Readers coming in mid-thread can still refer
to the public archives if they want more context.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 12:31 ` [Qemu-devel] [virtio-dev] " Jason Wang
@ 2017-05-25 17:57 ` Michael S. Tsirkin
2017-06-04 10:34 ` Wei Wang
0 siblings, 1 reply; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-05-25 17:57 UTC (permalink / raw)
To: Jason Wang
Cc: Wei Wang, Stefan Hajnoczi, virtio-dev, marcandre.lureau,
qemu-devel, pbonzini
On Thu, May 25, 2017 at 08:31:09PM +0800, Jason Wang wrote:
>
>
> On 2017年05月25日 20:22, Jason Wang wrote:
> > > >
> > > > Even with vhost-pci to virito-net configuration, I think rx
> > > > zerocopy could be achieved but not implemented in your driver
> > > > (probably more easier in pmd).
> > > >
> > > Yes, it would be easier with dpdk pmd. But I think it would not be
> > > important in the NFV use case,
> > > since the data flow goes to one direction often.
> > >
> > > Best,
> > > Wei
> > >
> >
> > I would say let's don't give up on any possible performance optimization
> > now. You can do it in the future.
> >
> > If you still want to keep the copy in both tx and rx, you'd better:
> >
> > - measure the performance of larger packet size other than 64B
> > - consider whether or not it's a good idea to do it in vcpu thread, or
> > move it to another one(s)
> >
> > Thanks
>
> And what's more important, since you care NFV seriously. I would really
> suggest you to draft a pmd for vhost-pci and use it to for benchmarking.
> It's real life case. OVS dpdk is known for not optimized for kernel drivers.
>
> Good performance number can help us to examine the correctness of both
> design and implementation.
>
> Thanks
I think that's a very valid point. Linux isn't currently optimized to
handle packets in a device BAR.
There are several issues here and you do need to address them in the
kernel, no way around that:
1. Lots of drivers set the protection to
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vfio certainly does, and so, I think, does pci sysfs.
You won't get good performance with this; you want to use
a cacheable mapping.
This needs to be addressed for a pmd to work well.
2. Linux mostly assumes a PCI BAR isn't memory; ioremap_cache returns
__iomem pointers, which aren't supposed to be dereferenced directly.
You want a new API that does a direct remap, or a copy if that's not
possible. Alternatively, remap or fail, kind of like pci_remap_iospace.
Maybe there's already something like that - I'm not sure.
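For what it's worth, memremap() with MEMREMAP_WB may already be close to
that (an assumption worth checking, not something verified in this
thread): it hands back a plain pointer, or NULL when a cacheable mapping
isn't possible. A sketch from the driver's side:

    /* Sketch: map the bar write-back; memremap() fails instead of
     * silently handing back an uncached or __iomem-only mapping. */
    resource_size_t start = pci_resource_start(pdev, bar);
    resource_size_t len = pci_resource_len(pdev, bar);
    void *mem = memremap(start, len, MEMREMAP_WB);

    if (!mem)
            return -ENOMEM; /* no cacheable mapping on this architecture */
    memcpy(rx_buf, mem + desc_off, desc_len); /* plain dereference is fine */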
Thanks,
MST
--
MST
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 14:35 ` [Qemu-devel] " Eric Blake
@ 2017-05-26 4:26 ` Jason Wang
0 siblings, 0 replies; 52+ messages in thread
From: Jason Wang @ 2017-05-26 4:26 UTC (permalink / raw)
To: Eric Blake, Wei Wang, Stefan Hajnoczi
Cc: virtio-dev, mst, marcandre.lureau, qemu-devel, pbonzini
On 2017年05月25日 22:35, Eric Blake wrote:
> [meta-comment]
>
> On 05/25/2017 07:22 AM, Jason Wang wrote:
> [snip]
>
>>>>>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>>>>>> Hi:
> 17 levels of '>' when I add my reply. Wow.
>
>>> I think that's another direction or future extension.
>>> We already have the vhost-pci to virtio-net model on the way, so I
>>> think it would be better to start from here.
>>>
>> If vhost-pci to vhost-pci is obvious superior, why not try this consider
>> we're at rather early stage for vhost-pci?
>>
> I have to scroll a couple of screens past heavily-quoted material before
> getting to the start of the additions to the thread. It's not only
> okay, but recommended, to trim your replies down to relevant context so
> that it is easier to get to your additions (3 or 4 levels of quoted
> material can still be relevant, but 17 levels is usually a sign that you
> are including too much). Readers coming in mid-thread can still refer
> to the public archives if they want more context.
>
Ok, will do.
Thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-05-25 17:57 ` Michael S. Tsirkin
@ 2017-06-04 10:34 ` Wei Wang
2017-06-05 2:21 ` Michael S. Tsirkin
0 siblings, 1 reply; 52+ messages in thread
From: Wei Wang @ 2017-06-04 10:34 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang
Cc: Stefan Hajnoczi, virtio-dev, marcandre.lureau, qemu-devel, pbonzini
On 05/26/2017 01:57 AM, Michael S. Tsirkin wrote:
>
> I think that's a very valid point. Linux isn't currently optimized to
> handle packets in device BAR.
>
> There are several issues here and you do need to address them in the
> kernel, no way around that:
>
> 1. lots of drivers set protection to
> vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>
Sorry for my late reply.
In the implementation tests, I didn't find an issue when letting the
guest directly access the bar MMIO mapping returned by ioremap_cache().
If that's conventionally improper, we can probably add a new
function similar to ioremap_cache, as the 2nd comment below suggests.
So, in any case, the vhost-pci driver uses ioremap_cache() or a similar
function, which sets the memory type to WB.
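Roughly, in the probe path (a sketch, not the actual driver code; the
bar index is illustrative):

    /* Sketch: map the whole bar cacheable (WB) and use it as the window
     * onto the peer VM's memory. */
    resource_size_t bar_start = pci_resource_start(pdev, 2);
    resource_size_t bar_len = pci_resource_len(pdev, 2);
    void __iomem *base = ioremap_cache(bar_start, bar_len);

    if (!base)
            return -ENOMEM;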
> vfio certainly does, and so I think does pci sysfs.
> You won't get good performance with this, you want to use
> a cacheable mapping.
> This needs to be addressed for pmd to work well.
In case it's useful for the discussion here, a little background
about how the bar MMIO is used in vhost-pci:
the device in QEMU sets up the MemoryRegion of the bar as "ram" type,
which will finally have translation mappings created in EPT. So, the memory
setup of the bar is the same as adding regular RAM. It's like we are
passing a bar memory through to the guest, which allows the guest to
directly access the bar memory.
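Roughly, on the QEMU side (a sketch assuming the peer's memory has
already been mmap()ed from the fds received over the vhost-user socket;
the names are illustrative):

    /* Sketch: back the bar with the mmap()ed peer memory as "ram" type,
     * so KVM creates EPT mappings for it like for ordinary guest RAM. */
    memory_region_init(&dev->bar_mr, OBJECT(dev), "vhost-pci-bar", bar_size);
    memory_region_init_ram_ptr(&dev->peer_mr, OBJECT(dev), "peer-mem",
                               region_size, mmapped_ptr);
    memory_region_add_subregion(&dev->bar_mr, region_offset, &dev->peer_mr);
    pci_register_bar(&dev->pci_dev, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_PREFETCH |
                     PCI_BASE_ADDRESS_MEM_TYPE_64, &dev->bar_mr);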
Back to the comments: why would it not be cacheable memory when the
vhost-pci driver explicitly uses ioremap_cache()?
>
> 2. linux mostly assumes PCI BAR isn't memory, ioremap_cache returns __iomem
> pointers which aren't supposed to be dereferenced directly.
> You want a new API that does direct remap or copy if not possible.
> Alternatively remap or fail, kind of like pci_remap_iospace.
> Maybe there's already something like that - I'm not sure.
>
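(For concreteness, one hypothetical shape such a helper could take; no
function like this exists in the kernel today, and the name and
semantics are purely an assumption:)

/* Return a plain, dereferenceable pointer to a cacheable mapping of a
 * memory BAR, or NULL on architectures where that isn't possible. */
void *pci_remap_membar_cached(struct pci_dev *pdev, int bar);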
For the vhost-pci case, the BAR is known to be a portion of physical
memory. So, in this case, would it be an issue if the driver accesses
it directly?
(As mentioned above, the implementation functions correctly.)
Best,
Wei
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
2017-06-04 10:34 ` Wei Wang
@ 2017-06-05 2:21 ` Michael S. Tsirkin
0 siblings, 0 replies; 52+ messages in thread
From: Michael S. Tsirkin @ 2017-06-05 2:21 UTC (permalink / raw)
To: Wei Wang
Cc: Jason Wang, Stefan Hajnoczi, virtio-dev, marcandre.lureau,
qemu-devel, pbonzini
On Sun, Jun 04, 2017 at 06:34:45PM +0800, Wei Wang wrote:
> On 05/26/2017 01:57 AM, Michael S. Tsirkin wrote:
> >
> > I think that's a very valid point. Linux isn't currently optimized to
> > handle packets in a device BAR.
> > There are several issues here and you do need to address them in the
> > kernel, no way around that:
> >
> > 1. lots of drivers set protection to
> > vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> >
> Sorry for my late reply.
>
> In my implementation tests, I didn't find any issue when letting the
> guest directly access the BAR MMIO returned by ioremap_cache().
> If that's conventionally improper, we can probably add a new
> function similar to ioremap_cache(), as the 2nd comment below
> suggests.
Right. And just disable the driver on architectures that don't support it.
> So, in any case, the vhost-pci driver uses ioremap_cache() or a similar
> function, which sets the memory type to WB (write-back).
>
And that's great. AFAIK VFIO doesn't, though; you will need to
teach it to do that in order to use userspace drivers.
>
> > vfio certainly does, and so I think does pci sysfs.
> > You won't get good performance with this; you want to use
> > a cacheable mapping.
> > This needs to be addressed for pmd to work well.
>
> In case it's useful for the discussion here, a little background on
> how the BAR MMIO is used in vhost-pci:
> the device in QEMU sets up the BAR's MemoryRegion as "ram" type,
> which eventually gets translation mappings created in EPT. So the
> memory setup of the BAR is the same as adding regular RAM; it's as if
> we pass the BAR memory through to the guest, allowing the guest to
> access the BAR memory directly.
>
> Back to the comments: why would it not be cacheable memory when the
> vhost-pci driver explicitly uses ioremap_cache()?
It is. But when you write a userspace driver, you will need
to teach vfio to allow cacheable access from userspace.
> >
> > 2. Linux mostly assumes a PCI BAR isn't memory; ioremap_cache returns __iomem
> > pointers, which aren't supposed to be dereferenced directly.
> > You want a new API that does direct remap or copy if not possible.
> > Alternatively remap or fail, kind of like pci_remap_iospace.
> > Maybe there's already something like that - I'm not sure.
> >
>
> For the vhost-pci case, the BAR is known to be a portion of physical memory.
Yes, but AFAIK __iomem mappings still can't be portably dereferenced on
all architectures. ioremap_cache() simply doesn't always give you a
dereferenceable address.
> So, in this case, would it be an issue if the driver accesses it
> directly?
> (As mentioned above, the implementation functions correctly.)
>
> Best,
> Wei
You mean like this:
void __iomem *baseptr = ioremap_cache(....);
unsigned long signature = *(unsigned int *)baseptr;
It works on Intel, but sparse will complain, since the cast discards
the __iomem annotation. See Documentation/bus-virt-phys-mapping.txt
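(For comparison, a sketch of a sparse-clean way to read the same data,
going through the I/O accessors instead of dereferencing the __iomem
pointer; start/len stand for the BAR address and length obtained from
pci_resource_start()/pci_resource_len():)

void __iomem *baseptr = ioremap_cache(start, len);

/* readl() and memcpy_fromio() accept __iomem pointers, so sparse stays
 * quiet and the access remains portable across architectures. */
unsigned int signature = readl(baseptr);

/* or, for larger regions: */
memcpy_fromio(&signature, baseptr, sizeof(signature));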
--
MST
^ permalink raw reply [flat|nested] 52+ messages in thread
Thread overview: 52+ messages
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 01/16] vhost-user: share the vhost-user protocol related structures Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 02/16] vl: add the vhost-pci-slave command line option Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 03/16] vhost-pci-slave: create a vhost-user slave to support vhost-pci Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 04/16] vhost-pci-net: add vhost-pci-net Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 05/16] vhost-pci-net-pci: add vhost-pci-net-pci Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support Wei Wang
2017-05-15 0:21 ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 07/16] vhost-user: send device id to the slave Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues " Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
2017-05-12 8:51 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio" Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave Wei Wang
2017-05-12 9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
2017-05-16 15:21 ` Michael S. Tsirkin
2017-05-16 6:46 ` Jason Wang
2017-05-16 7:12 ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-17 6:16 ` Jason Wang
2017-05-17 6:22 ` Jason Wang
2017-05-18 3:03 ` Wei Wang
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-19 9:00 ` Wei Wang
2017-05-19 9:53 ` Jason Wang
2017-05-19 20:44 ` Michael S. Tsirkin
2017-05-23 11:09 ` Wei Wang
2017-05-23 15:15 ` Michael S. Tsirkin
2017-05-19 15:33 ` Stefan Hajnoczi
2017-05-22 2:27 ` Jason Wang
2017-05-22 11:46 ` Wang, Wei W
2017-05-23 2:08 ` Jason Wang
2017-05-23 5:47 ` Wei Wang
2017-05-23 6:32 ` Jason Wang
2017-05-23 10:48 ` Wei Wang
2017-05-24 3:24 ` Jason Wang
2017-05-24 8:31 ` Wei Wang
2017-05-25 7:59 ` Jason Wang
2017-05-25 12:01 ` Wei Wang
2017-05-25 12:22 ` Jason Wang
2017-05-25 12:31 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-25 17:57 ` Michael S. Tsirkin
2017-06-04 10:34 ` Wei Wang
2017-06-05 2:21 ` Michael S. Tsirkin
2017-05-25 14:35 ` [Qemu-devel] " Eric Blake
2017-05-26 4:26 ` Jason Wang
2017-05-19 16:49 ` Michael S. Tsirkin
2017-05-22 2:22 ` Jason Wang