* [Qemu-devel] [PULL 1/7] net: Forbid dealing with packets when VM is not running
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 2/7] net: don't use set/get_pointer() in set/get_netdev() Stefan Hajnoczi
` (5 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Maydell, zhanghailiang, Stefan Hajnoczi
From: zhanghailiang <zhang.zhanghailiang@huawei.com>
For all NICs (except virtio-net) emulated by QEMU,
such as e1000, rtl8139, pcnet and ne2k_pci,
QEMU can still receive packets while the VM is not running.
If this happens during the migration's final *PAUSE VM* stage, but
before the end of the migration, the newly received packets may dirty
parts of RAM that have already been cached in *iovec* (to be sent
asynchronously), and dirty parts of RAM whose changes will then be missed.
This leads to serious network faults in the VM.
To avoid this, forbid receiving packets in the generic net code while the
VM is not running.
Bug reproduction steps:
(1) Start a VM configured with at least one NIC
(2) In the VM, open several terminals and run *ping IP -i 0.1*
(3) Migrate the VM repeatedly between two hosts
The *ping* command in the VM will very likely fail with the message
'Destination Host Unreachable', and the NIC in the VM will stay unavailable
unless you run 'service network restart'.
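For illustration only (not part of this patch), here is a minimal sketch of
where the generic check takes effect. The backend read handler below is
invented for the example; qemu_can_send_packet() and qemu_send_packet() are
the existing helpers:

/* Hypothetical backend read path, for illustration only. */
static void example_backend_read(NetClientState *nc, int fd)
{
    uint8_t buf[65536];
    ssize_t len;

    /* With this patch the check below also fails while the VM is
     * paused (e.g. during migration's final stage), so no frame is
     * delivered and no guest RAM is dirtied behind migration's back. */
    if (!qemu_can_send_packet(nc)) {
        return;
    }

    len = read(fd, buf, sizeof(buf));
    if (len > 0) {
        qemu_send_packet(nc, buf, len);
    }
}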
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
net/net.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/net/net.c b/net/net.c
index 6d930ea..962c05f 100644
--- a/net/net.c
+++ b/net/net.c
@@ -41,6 +41,7 @@
#include "qapi-visit.h"
#include "qapi/opts-visitor.h"
#include "qapi/dealloc-visitor.h"
+#include "sysemu/sysemu.h"
/* Net bridge is currently not supported for W32. */
#if !defined(_WIN32)
@@ -452,6 +453,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
int qemu_can_send_packet(NetClientState *sender)
{
+ int vm_running = runstate_is_running();
+
+ if (!vm_running) {
+ return 0;
+ }
+
if (!sender->peer) {
return 1;
}
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 2/7] net: don't use set/get_pointer() in set/get_netdev()
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 1/7] net: Forbid dealing with packets when VM is not running Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 3/7] virtio-net: don't run bh on vm stopped Stefan Hajnoczi
` (4 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Maydell, Jason Wang, Markus Armbruster, Stefan Hajnoczi
From: Jason Wang <jasowang@redhat.com>
Commit 1ceef9f27359cbe92ef124bf74de6f792e71f6fb (net: multiqueue
support) tries to use set_pointer() and get_pointer() to set and get the
NICPeers defined by DEFINE_PROP_NETDEV, which is not a pointer. The trick
works but results in an unclean and fragile implementation (e.g.
print_netdev and parse_netdev).
This patch solves the issue by dropping set/get_pointer() and setting and
getting the netdev directly in set_netdev() and get_netdev(). After this,
parse_netdev() and print_netdev() are no longer used and are removed from
the source.
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/core/qdev-properties-system.c | 71 ++++++++++++++++++++++------------------
1 file changed, 39 insertions(+), 32 deletions(-)
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index ae0900f..b3753ce 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -176,41 +176,67 @@ PropertyInfo qdev_prop_chr = {
};
/* --- netdev device --- */
+static void get_netdev(Object *obj, Visitor *v, void *opaque,
+ const char *name, Error **errp)
+{
+ DeviceState *dev = DEVICE(obj);
+ Property *prop = opaque;
+ NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+ char *p = g_strdup(peers_ptr->ncs[0]->name);
-static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
+ visit_type_str(v, &p, name, errp);
+ g_free(p);
+}
+
+static void set_netdev(Object *obj, Visitor *v, void *opaque,
+ const char *name, Error **errp)
{
- NICPeers *peers_ptr = (NICPeers *)ptr;
+ DeviceState *dev = DEVICE(obj);
+ Property *prop = opaque;
+ NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
NetClientState **ncs = peers_ptr->ncs;
NetClientState *peers[MAX_QUEUE_NUM];
- int queues, i = 0;
- int ret;
+ Error *local_err = NULL;
+ int err, queues, i = 0;
+ char *str;
+
+ if (dev->realized) {
+ qdev_prop_set_after_realize(dev, name, errp);
+ return;
+ }
+
+ visit_type_str(v, &str, name, &local_err);
+ if (local_err) {
+ error_propagate(errp, local_err);
+ return;
+ }
queues = qemu_find_net_clients_except(str, peers,
NET_CLIENT_OPTIONS_KIND_NIC,
MAX_QUEUE_NUM);
if (queues == 0) {
- ret = -ENOENT;
+ err = -ENOENT;
goto err;
}
if (queues > MAX_QUEUE_NUM) {
- ret = -E2BIG;
+ err = -E2BIG;
goto err;
}
for (i = 0; i < queues; i++) {
if (peers[i] == NULL) {
- ret = -ENOENT;
+ err = -ENOENT;
goto err;
}
if (peers[i]->peer) {
- ret = -EEXIST;
+ err = -EEXIST;
goto err;
}
if (ncs[i]) {
- ret = -EINVAL;
+ err = -EINVAL;
goto err;
}
@@ -219,31 +245,12 @@ static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
}
peers_ptr->queues = queues;
-
- return 0;
+ g_free(str);
+ return;
err:
- return ret;
-}
-
-static char *print_netdev(void *ptr)
-{
- NetClientState *netdev = ptr;
- const char *val = netdev->name ? netdev->name : "";
-
- return g_strdup(val);
-}
-
-static void get_netdev(Object *obj, Visitor *v, void *opaque,
- const char *name, Error **errp)
-{
- get_pointer(obj, v, opaque, print_netdev, name, errp);
-}
-
-static void set_netdev(Object *obj, Visitor *v, void *opaque,
- const char *name, Error **errp)
-{
- set_pointer(obj, v, opaque, parse_netdev, name, errp);
+ error_set_from_qdev_prop_error(errp, err, dev, prop, str);
+ g_free(str);
}
PropertyInfo qdev_prop_netdev = {
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 3/7] virtio-net: don't run bh on vm stopped
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 1/7] net: Forbid dealing with packets when VM is not running Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 2/7] net: don't use set/get_pointer() in set/get_netdev() Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 4/7] virtio: don't call device on !vm_running Stefan Hajnoczi
` (3 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, qemu-stable, Stefan Hajnoczi, Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
Commit 783e7706937fe15523b609b545587a028a2bdd03
(virtio-net: stop/start bh when appropriate)
is incomplete: the BH might execute within the same main loop iteration but
after vmstop, so in theory we might trigger an assertion.
I was unable to reproduce this in practice,
but it seems clear enough that the potential is there, so it is worth fixing.
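For illustration only, the pattern used below is: bail out of the BH when the
VM has already stopped, but keep the pending flag set so the work is redone
once the VM resumes and the BH is rescheduled. The types and names in this
sketch (other than vm_running and tx_waiting) are invented:

/* Illustrative sketch, not part of the patch; the struct is invented. */
typedef struct {
    bool tx_waiting;
    VirtIODevice *vdev;
} ExampleQueue;

static void example_tx_bh(void *opaque)
{
    ExampleQueue *q = opaque;

    if (!q->vdev->vm_running) {
        /* VM stopped after the BH was scheduled: keep tx_waiting set
         * so the transmit is retried when the VM restarts. */
        assert(q->tx_waiting);
        return;
    }

    q->tx_waiting = 0;
    /* ... actually flush the transmit queue here ... */
}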
Cc: qemu-stable@nongnu.org
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/net/virtio-net.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 268eff9..365e266 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1224,7 +1224,12 @@ static void virtio_net_tx_timer(void *opaque)
VirtIONetQueue *q = opaque;
VirtIONet *n = q->n;
VirtIODevice *vdev = VIRTIO_DEVICE(n);
- assert(vdev->vm_running);
+ /* This happens when device was stopped but BH wasn't. */
+ if (!vdev->vm_running) {
+ /* Make sure tx waiting is set, so we'll run when restarted. */
+ assert(q->tx_waiting);
+ return;
+ }
q->tx_waiting = 0;
@@ -1244,7 +1249,12 @@ static void virtio_net_tx_bh(void *opaque)
VirtIODevice *vdev = VIRTIO_DEVICE(n);
int32_t ret;
- assert(vdev->vm_running);
+ /* This happens when device was stopped but BH wasn't. */
+ if (!vdev->vm_running) {
+ /* Make sure tx waiting is set, so we'll run when restarted. */
+ assert(q->tx_waiting);
+ return;
+ }
q->tx_waiting = 0;
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 4/7] virtio: don't call device on !vm_running
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
` (2 preceding siblings ...)
2014-09-04 15:50 ` [Qemu-devel] [PULL 3/7] virtio-net: don't run bh on vm stopped Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 5/7] net: invoke callback when purging queue Stefan Hajnoczi
` (2 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Jason Wang, qemu-stable, Stefan Hajnoczi,
Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
On VM stop, virtio changes the vm_running state
too soon, so callbacks can get invoked with
vm_running = false.
Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/virtio/virtio.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 5c98180..ac22238 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1108,7 +1108,10 @@ static void virtio_vmstate_change(void *opaque, int running, RunState state)
BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
bool backend_run = running && (vdev->status & VIRTIO_CONFIG_S_DRIVER_OK);
- vdev->vm_running = running;
+
+ if (running) {
+ vdev->vm_running = running;
+ }
if (backend_run) {
virtio_set_status(vdev, vdev->status);
@@ -1121,6 +1124,10 @@ static void virtio_vmstate_change(void *opaque, int running, RunState state)
if (!backend_run) {
virtio_set_status(vdev, vdev->status);
}
+
+ if (!running) {
+ vdev->vm_running = running;
+ }
}
void virtio_init(VirtIODevice *vdev, const char *name,
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 5/7] net: invoke callback when purging queue
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
` (3 preceding siblings ...)
2014-09-04 15:50 ` [Qemu-devel] [PULL 4/7] virtio: don't call device on !vm_running Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 6/7] net: complete all queued packets on VM stop Stefan Hajnoczi
2014-09-04 15:50 ` [Qemu-devel] [PULL 7/7] virtio-net: purge outstanding packets when starting vhost Stefan Hajnoczi
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Jason Wang, qemu-stable, Stefan Hajnoczi,
Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
Devices rely on packet callbacks eventually running,
but we violate this rule whenever we purge the queue.
To fix this, invoke the callbacks for all packets on purge.
Set the length to 0 so that callers can detect that this
happened and re-queue the packet if necessary.
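For illustration only, a sketch of how a sent callback can now tell a purge
apart from a normal completion. The callback name is invented; the
(NetClientState *, ssize_t) signature is the existing NetPacketSent one:

/* Illustrative sketch, not part of the patch. */
static void example_sent_cb(NetClientState *sender, ssize_t len)
{
    if (len == 0) {
        /* The queue was purged before the packet went out; remember
         * to re-queue/retransmit it once the device restarts. */
        return;
    }
    /* len > 0: normal completion, free or recycle the buffer. */
}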
Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
net/queue.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/queue.c b/net/queue.c
index 859d02a..f948318 100644
--- a/net/queue.c
+++ b/net/queue.c
@@ -233,6 +233,9 @@ void qemu_net_queue_purge(NetQueue *queue, NetClientState *from)
if (packet->sender == from) {
QTAILQ_REMOVE(&queue->packets, packet, entry);
queue->nq_count--;
+ if (packet->sent_cb) {
+ packet->sent_cb(packet->sender, 0);
+ }
g_free(packet);
}
}
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 6/7] net: complete all queued packets on VM stop
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
` (4 preceding siblings ...)
2014-09-04 15:50 ` [Qemu-devel] [PULL 5/7] net: invoke callback when purging queue Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
2014-09-09 6:05 ` Jason Wang
2014-09-04 15:50 ` [Qemu-devel] [PULL 7/7] virtio-net: purge outstanding packets when starting vhost Stefan Hajnoczi
6 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Jason Wang, qemu-stable, Stefan Hajnoczi,
Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
This completes all queued packets, ensuring that callbacks
will not run when the VM is stopped.
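For illustration only, the registration pattern the patch below relies on.
The example names are invented; qemu_add_vm_change_state_handler() and
qemu_del_vm_change_state_handler() are the existing helpers:

/* Illustrative sketch, not part of the patch. */
static VMChangeStateEntry *example_entry;

static void example_vm_change_state_handler(void *opaque, int running,
                                            RunState state)
{
    if (!running) {
        /* Quiesce: flush or purge anything still queued so no packet
         * callback can run after the VM has stopped. */
    }
}

static void example_init(void)
{
    example_entry =
        qemu_add_vm_change_state_handler(example_vm_change_state_handler,
                                         NULL);
}

static void example_cleanup(void)
{
    qemu_del_vm_change_state_handler(example_entry);
}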
Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
net/net.c | 33 ++++++++++++++++++++++++++++++++-
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/net/net.c b/net/net.c
index 962c05f..7acc162 100644
--- a/net/net.c
+++ b/net/net.c
@@ -48,6 +48,7 @@
# define CONFIG_NET_BRIDGE
#endif
+static VMChangeStateEntry *net_change_state_entry;
static QTAILQ_HEAD(, NetClientState) net_clients;
const char *host_net_devices[] = {
@@ -511,7 +512,8 @@ void qemu_purge_queued_packets(NetClientState *nc)
qemu_net_queue_purge(nc->peer->incoming_queue, nc);
}
-void qemu_flush_queued_packets(NetClientState *nc)
+static
+void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
{
nc->receive_disabled = 0;
@@ -525,9 +527,17 @@ void qemu_flush_queued_packets(NetClientState *nc)
* the file descriptor (for tap, for example).
*/
qemu_notify_event();
+ } else if (purge) {
+ /* Unable to empty the queue, purge remaining packets */
+ qemu_net_queue_purge(nc->incoming_queue, nc);
}
}
+void qemu_flush_queued_packets(NetClientState *nc)
+{
+ qemu_flush_or_purge_queued_packets(nc, false);
+}
+
static ssize_t qemu_send_packet_async_with_flags(NetClientState *sender,
unsigned flags,
const uint8_t *buf, int size,
@@ -1175,6 +1185,22 @@ void qmp_set_link(const char *name, bool up, Error **errp)
}
}
+static void net_vm_change_state_handler(void *opaque, int running,
+ RunState state)
+{
+ /* Complete all queued packets, to guarantee we don't modify
+ * state later when VM is not running.
+ */
+ if (!running) {
+ NetClientState *nc;
+ NetClientState *tmp;
+
+ QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
+ qemu_flush_or_purge_queued_packets(nc, true);
+ }
+ }
+}
+
void net_cleanup(void)
{
NetClientState *nc;
@@ -1190,6 +1216,8 @@ void net_cleanup(void)
qemu_del_net_client(nc);
}
}
+
+ qemu_del_vm_change_state_handler(net_change_state_entry);
}
void net_check_clients(void)
@@ -1275,6 +1303,9 @@ int net_init_clients(void)
#endif
}
+ net_change_state_entry =
+ qemu_add_vm_change_state_handler(net_vm_change_state_handler, NULL);
+
QTAILQ_INIT(&net_clients);
if (qemu_opts_foreach(qemu_find_opts("netdev"), net_init_netdev, NULL, 1) == -1)
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [Qemu-devel] [PULL 6/7] net: complete all queued packets on VM stop
2014-09-04 15:50 ` [Qemu-devel] [PULL 6/7] net: complete all queued packets on VM stop Stefan Hajnoczi
@ 2014-09-09 6:05 ` Jason Wang
0 siblings, 0 replies; 17+ messages in thread
From: Jason Wang @ 2014-09-09 6:05 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel
Cc: Peter Maydell, qemu-stable, Michael S. Tsirkin
On 09/04/2014 11:50 PM, Stefan Hajnoczi wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> This completes all packets, ensuring that callbacks
> will not run when VM is stopped.
>
> Cc: qemu-stable@nongnu.org
> Cc: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> net/net.c | 33 ++++++++++++++++++++++++++++++++-
> 1 file changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/net/net.c b/net/net.c
> index 962c05f..7acc162 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -48,6 +48,7 @@
> # define CONFIG_NET_BRIDGE
> #endif
>
> +static VMChangeStateEntry *net_change_state_entry;
> static QTAILQ_HEAD(, NetClientState) net_clients;
>
> const char *host_net_devices[] = {
> @@ -511,7 +512,8 @@ void qemu_purge_queued_packets(NetClientState *nc)
> qemu_net_queue_purge(nc->peer->incoming_queue, nc);
> }
>
> -void qemu_flush_queued_packets(NetClientState *nc)
> +static
> +void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
> {
> nc->receive_disabled = 0;
>
> @@ -525,9 +527,17 @@ void qemu_flush_queued_packets(NetClientState *nc)
> * the file descriptor (for tap, for example).
> */
> qemu_notify_event();
> + } else if (purge) {
> + /* Unable to empty the queue, purge remaining packets */
> + qemu_net_queue_purge(nc->incoming_queue, nc);
> }
> }
>
> +void qemu_flush_queued_packets(NetClientState *nc)
> +{
> + qemu_flush_or_purge_queued_packets(nc, false);
> +}
> +
> static ssize_t qemu_send_packet_async_with_flags(NetClientState *sender,
> unsigned flags,
> const uint8_t *buf, int size,
> @@ -1175,6 +1185,22 @@ void qmp_set_link(const char *name, bool up, Error **errp)
> }
> }
>
> +static void net_vm_change_state_handler(void *opaque, int running,
> + RunState state)
> +{
> + /* Complete all queued packets, to guarantee we don't modify
> + * state later when VM is not running.
> + */
> + if (!running) {
> + NetClientState *nc;
> + NetClientState *tmp;
> +
> + QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> + qemu_flush_or_purge_queued_packets(nc, true);
> + }
> + }
> +}
Something like net_drain_all_queue() in do_vm_stop() looks simpler. And
doing this is tricky if it depends on other handlers being called first:
e.g. the virtio-net vm change state handler will set vm_running to false,
and net_vm_change_state_handler() will be called after that. This means
virtio_net_flush_tx() will still hit the assert, since it will be called
from the packet callback (virtio_net_tx_complete()).
> +
> void net_cleanup(void)
> {
> NetClientState *nc;
> @@ -1190,6 +1216,8 @@ void net_cleanup(void)
> qemu_del_net_client(nc);
> }
> }
> +
> + qemu_del_vm_change_state_handler(net_change_state_entry);
> }
>
> void net_check_clients(void)
> @@ -1275,6 +1303,9 @@ int net_init_clients(void)
> #endif
> }
>
> + net_change_state_entry =
> + qemu_add_vm_change_state_handler(net_vm_change_state_handler, NULL);
> +
> QTAILQ_INIT(&net_clients);
>
> if (qemu_opts_foreach(qemu_find_opts("netdev"), net_init_netdev, NULL, 1) == -1)
^ permalink raw reply [flat|nested] 17+ messages in thread
* [Qemu-devel] [PULL 7/7] virtio-net: purge outstanding packets when starting vhost
2014-09-04 15:50 [Qemu-devel] [PULL 0/7] Net patches Stefan Hajnoczi
` (5 preceding siblings ...)
2014-09-04 15:50 ` [Qemu-devel] [PULL 6/7] net: complete all queued packets on VM stop Stefan Hajnoczi
@ 2014-09-04 15:50 ` Stefan Hajnoczi
6 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2014-09-04 15:50 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Jason Wang, qemu-stable, Stefan Hajnoczi,
Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
Whenever we start vhost, virtio could have outstanding packets
queued; when they complete later, we will modify the ring
while vhost is processing it.
To prevent this, purge the outstanding packets on vhost start.
Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/net/virtio-net.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 365e266..826a2a5 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -125,10 +125,23 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
return;
}
if (!n->vhost_started) {
- int r;
+ int r, i;
+
if (!vhost_net_query(get_vhost_net(nc->peer), vdev)) {
return;
}
+
+ /* Any packets outstanding? Purge them to avoid touching rings
+ * when vhost is running.
+ */
+ for (i = 0; i < queues; i++) {
+ NetClientState *qnc = qemu_get_subqueue(n->nic, i);
+
+ /* Purge both directions: TX and RX. */
+ qemu_net_queue_purge(qnc->peer->incoming_queue, qnc);
+ qemu_net_queue_purge(qnc->incoming_queue, qnc->peer);
+ }
+
n->vhost_started = 1;
r = vhost_net_start(vdev, n->nic->ncs, queues);
if (r < 0) {
--
1.9.3
^ permalink raw reply related [flat|nested] 17+ messages in thread