* [Qemu-devel] [PATCH 0/3] forbid dealing with net packets when VM is not running
@ 2014-08-14  6:13 zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 1/3] net: Forbid dealing with " zhanghailiang
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14  6:13 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, stefanha, mst, luonengjun, peter.huangpeng,
	aliguori, akong, zhanghailiang

Hi,

For all NICs emulated by QEMU except virtio-net, such as e1000, rtl8139,
pcnet and ne2k_pci, QEMU can still receive packets while the VM is not running.

If this happens during migration's final pause-VM stage, the RAM newly dirtied
by those packets is missed, which leads to a serious network fault in the VM.

We discussed this problem a long time ago; see
http://qemu.11.n7.nabble.com/PATCH-e1000-rtl8139-forbid-dealing-with-packets-when-VM-is-paused-td261168.html

The problem still remains; the bug can be reproduced with the following steps:
(1) Start a VM configured with at least one NIC.
(2) In the VM, open several terminals and run *ping IP -i 0.1*.
(3) Migrate the VM repeatedly between two hosts.
The *ping* command in the VM will then very likely fail with
'Destination Host Unreachable', and the NIC in the VM stays unusable until you
run 'service network restart'.

This patch set solves the problem; following the suggestion of Peter Maydell
and Stefan Hajnoczi, we implement the fix in QEMU's generic net layer.
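
As a quick overview, here is a condensed sketch of the two changes, taken from
the diffs below and lightly reformatted; it is not meant to apply as a patch
and relies on QEMU's net-layer types and helpers:

    /* Patch 1 (net/net.c): added at the top of qemu_can_send_packet(), so
     * every NIC's receive path refuses packets while the VM is stopped. */
    if (!runstate_is_running()) {
        return 0;
    }

    /* Patch 2 (net/net.c): registered per NIC in qemu_new_nic() via
     * qemu_add_vm_change_state_handler(); once the VM is running again it
     * flushes any packets that were queued while the VM was stopped. */
    static void nic_vmstate_change_handler(void *opaque, int running,
                                           RunState state)
    {
        NICState *nic = opaque;
        NetClientState *nc;
        int i, queues;

        if (!running) {
            return;
        }
        queues = MAX(1, nic->conf->peers.queues);
        for (i = 0; i < queues; i++) {
            nc = &nic->ncs[i];
            if (nc->receive_disabled
                || (nc->info->can_receive && !nc->info->can_receive(nc))) {
                continue;
            }
            qemu_flush_queued_packets(nc);
        }
    }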

PS.
In the old discussion, Michael S. Tsirkin pointed out that other devices may
have the same problem and suggested stopping all I/O threads together with the
VM when it is not running.

I have tested serial, virtio-console, etc., and did not find any obvious fault
in the migration scenario.
Also, the scheme of stopping all I/O threads is complex and would be a lot of work.

For now, I think it is fine to fix the obvious net bug first.


Thanks,
zhanghailiang

zhanghailiang (3):
  net: forbid dealing with packets when VM is not running
  net: Flush queues when runstate changes back to running
  virtio-net: Remove checking vm state in virtio_net_can_receive

 hw/net/virtio-net.c |  4 ----
 include/net/net.h   |  2 ++
 net/net.c           | 28 ++++++++++++++++++++++++++++
 3 files changed, 30 insertions(+), 4 deletions(-)

-- 
1.7.12.4

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [Qemu-devel] [PATCH 1/3] net: Forbid dealing with packets when VM is not running
  2014-08-14  6:13 [Qemu-devel] [PATCH 0/3] forbid dealing with net packets when VM is not running zhanghailiang
@ 2014-08-14  6:13 ` zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 3/3] virtio-net: Remove checking vm state in virtio_net_can_receive zhanghailiang
  2 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14  6:13 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, stefanha, mst, luonengjun, peter.huangpeng,
	aliguori, akong, zhanghailiang

For all NICs emulated by QEMU except virtio-net, such as e1000, rtl8139,
pcnet and ne2k_pci, QEMU can still receive packets while the VM is not running.

If this happens during migration's final pause-VM stage, the RAM newly dirtied
by those packets is missed, which leads to a serious network fault in the VM.

To avoid this, do what virtio-net already does and forbid receiving packets in
the generic net code when the VM is not running.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/net/net.h | 1 +
 net/net.c         | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index ed594f9..312f728 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -8,6 +8,7 @@
 #include "net/queue.h"
 #include "migration/vmstate.h"
 #include "qapi-types.h"
+#include "sysemu/sysemu.h"
 
 #define MAX_QUEUE_NUM 1024
 
diff --git a/net/net.c b/net/net.c
index 6d930ea..5bb2821 100644
--- a/net/net.c
+++ b/net/net.c
@@ -452,6 +452,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
 
 int qemu_can_send_packet(NetClientState *sender)
 {
+    int vmstat = runstate_is_running();
+
+    if (!vmstat) {
+        return 0;
+    }
+
     if (!sender->peer) {
         return 1;
     }
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14  6:13 [Qemu-devel] [PATCH 0/3] forbid dealing with net packets when VM is not running zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 1/3] net: Forbid dealing with " zhanghailiang
@ 2014-08-14  6:13 ` zhanghailiang
  2014-08-14  7:12   ` Gonglei (Arei)
                     ` (2 more replies)
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 3/3] virtio-net: Remove checking vm state in virtio_net_can_receive zhanghailiang
  2 siblings, 3 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14  6:13 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, stefanha, mst, luonengjun, peter.huangpeng,
	aliguori, akong, zhanghailiang

When the runstate changes back to running, we definitely need to flush
queues to get packets flowing again.

Here we implement this in the net layer:
(1) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
which listens for VM runstate changes.
(2) Register a handler function for VM state changes.
When the VM changes back to running, we flush all queues in the callback function.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/net/net.h |  1 +
 net/net.c         | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 312f728..a294277 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -97,6 +97,7 @@ typedef struct NICState {
     NICConf *conf;
     void *opaque;
     bool peer_deleted;
+    VMChangeStateEntry *vmstate;
 } NICState;
 
 NetClientState *qemu_find_netdev(const char *id);
diff --git a/net/net.c b/net/net.c
index 5bb2821..506e58f 100644
--- a/net/net.c
+++ b/net/net.c
@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
     return nc;
 }
 
+static void nic_vmstate_change_handler(void *opaque,
+                                       int running,
+                                       RunState state)
+{
+    NICState *nic = opaque;
+    NetClientState *nc;
+    int i, queues;
+
+    if (!running) {
+        return;
+    }
+
+    queues =  MAX(1, nic->conf->peers.queues);
+    for (i = 0; i < queues; i++) {
+        nc = &nic->ncs[i];
+        if (nc->receive_disabled
+            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
+            continue;
+        }
+        qemu_flush_queued_packets(nc);
+    }
+}
+
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
@@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
     nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
     nic->opaque = opaque;
+    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
+                                                    nic);
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
@@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
         qemu_free_net_client(nc);
     }
 
+    qemu_del_vm_change_state_handler(nic->vmstate);
     g_free(nic);
 }
 
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [Qemu-devel] [PATCH 3/3] virtio-net: Remove checking vm state in virtio_net_can_receive
  2014-08-14  6:13 [Qemu-devel] [PATCH 0/3] forbid dealing with net packets when VM is not running zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 1/3] net: Forbid dealing with " zhanghailiang
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
@ 2014-08-14  6:13 ` zhanghailiang
  2 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14  6:13 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, stefanha, mst, luonengjun, peter.huangpeng,
	aliguori, akong, zhanghailiang

We now check the vm-running state in the generic net layer.

Also, we have to flush the queues in nic_vmstate_change_handler() when the
VM state changes back to running. The check removed here would prevent that
flush, because nic_vmstate_change_handler() consults the return value of
virtio_net_can_receive() while vdev->vm_running is still false. This depends
on the registration order of the callbacks: nic_vmstate_change_handler() is
called before virtio_vmstate_change().

So remove the now-unnecessary check.
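
To make the ordering concrete, here is an annotated fragment of the flush loop
added in patch 2 (the comments are illustrative; the call order is the one
described above, not something this fragment enforces):

    for (i = 0; i < queues; i++) {
        nc = &nic->ncs[i];
        /* With the old virtio-net check still in place, can_receive()
         * returns 0 here: virtio_vmstate_change() has not yet set
         * vdev->vm_running, so every queue would be skipped ... */
        if (nc->receive_disabled
            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
            continue;
        }
        /* ... and this flush, the whole point of patch 2, never runs. */
        qemu_flush_queued_packets(nc);
    }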

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 hw/net/virtio-net.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 268eff9..287d762 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     VirtIONetQueue *q = virtio_net_get_subqueue(nc);
 
-    if (!vdev->vm_running) {
-        return 0;
-    }
-
     if (nc->queue_index >= n->curr_queues) {
         return 0;
     }
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
@ 2014-08-14  7:12   ` Gonglei (Arei)
  2014-08-14  8:24     ` zhanghailiang
  2014-08-14 10:05   ` Michael S. Tsirkin
  2014-08-14 10:09   ` Michael S. Tsirkin
  2 siblings, 1 reply; 10+ messages in thread
From: Gonglei (Arei) @ 2014-08-14  7:12 UTC (permalink / raw)
  To: Zhanghailiang, qemu-devel
  Cc: peter.maydell, stefanha, mst, Luonengjun, Huangpeng (Peter),
	aliguori, akong

Hi,

> Subject: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes
> back to running
> 
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
> 
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> Which will listen for VM runstate changes.

Will this change block migration between different QEMU versions?

> (2) Register a handler function for VMstate change.
> When vm changes back to running, we flush all queues in the callback function.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
>  include/net/net.h |  1 +
>  net/net.c         | 26 ++++++++++++++++++++++++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
> 
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo
> *info,
>      return nc;
>  }
> 
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues =  MAX(1, nic->conf->peers.queues);
              ^
A superfluous space.
 
Best regards,
-Gonglei

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14  7:12   ` Gonglei (Arei)
@ 2014-08-14  8:24     ` zhanghailiang
  0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14  8:24 UTC (permalink / raw)
  To: Gonglei (Arei)
  Cc: peter.maydell, aliguori, mst, Luonengjun, Huangpeng (Peter),
	qemu-devel, stefanha, akong

On 2014/8/14 15:12, Gonglei (Arei) wrote:
> Hi,
>
>> Subject: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes
>> back to running
>>
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>
> Does this change will block migration during with different QEMU versions?
>

No. I have tested migration between qemu-1.5.1 and a new QEMU with these
patches merged, and everything seems OK.

>> (2) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> ---
>>   include/net/net.h |  1 +
>>   net/net.c         | 26 ++++++++++++++++++++++++++
>>   2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>>       NICConf *conf;
>>       void *opaque;
>>       bool peer_deleted;
>> +    VMChangeStateEntry *vmstate;
>>   } NICState;
>>
>>   NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo
>> *info,
>>       return nc;
>>   }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> +                                       int running,
>> +                                       RunState state)
>> +{
>> +    NICState *nic = opaque;
>> +    NetClientState *nc;
>> +    int i, queues;
>> +
>> +    if (!running) {
>> +        return;
>> +    }
>> +
>> +    queues =  MAX(1, nic->conf->peers.queues);
>                ^
> A superfluous space.
>

Yes, good catch. I will fix this. Thanks.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
  2014-08-14  7:12   ` Gonglei (Arei)
@ 2014-08-14 10:05   ` Michael S. Tsirkin
  2014-08-18  0:45     ` zhanghailiang
  2014-08-14 10:09   ` Michael S. Tsirkin
  2 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2014-08-14 10:05 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
	stefanha, akong

On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
> 
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> Which will listen for VM runstate changes.
> (2) Register a handler function for VMstate change.
> When vm changes back to running, we flush all queues in the callback function.

OK but smash this together with patch 1, otherwise
after patch 1 things are broken, which breaks
git bisect.

> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
>  include/net/net.h |  1 +
>  net/net.c         | 26 ++++++++++++++++++++++++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
>  
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>  
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues =  MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->receive_disabled
> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> +            continue;
> +        }
> +        qemu_flush_queued_packets(nc);
> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                    nic);
>  
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>  
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>  
> -- 
> 1.7.12.4
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
  2014-08-14  7:12   ` Gonglei (Arei)
  2014-08-14 10:05   ` Michael S. Tsirkin
@ 2014-08-14 10:09   ` Michael S. Tsirkin
  2014-08-18  0:46     ` zhanghailiang
  2 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2014-08-14 10:09 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
	stefanha, akong

On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
> 
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> Which will listen for VM runstate changes.
> (2) Register a handler function for VMstate change.
> When vm changes back to running, we flush all queues in the callback function.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>

Hmm, looks like the virtio patch will need to be squashed as well?

> ---
>  include/net/net.h |  1 +
>  net/net.c         | 26 ++++++++++++++++++++++++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
>  
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>  
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues =  MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->receive_disabled
> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> +            continue;
> +        }
> +        qemu_flush_queued_packets(nc);
> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                    nic);
>  
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>  
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>  
> -- 
> 1.7.12.4
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14 10:05   ` Michael S. Tsirkin
@ 2014-08-18  0:45     ` zhanghailiang
  0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-18  0:45 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
	stefanha, akong

On 2014/8/14 18:05, Michael S. Tsirkin wrote:
> On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>> (2) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>
> OK but smash this together with patch 1, otherwise
> after patch 1 things are broken, which breaks
> git bisect.
>

Hmm, I will put them together into one patch. Thanks for your review.

>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> ---
>>   include/net/net.h |  1 +
>>   net/net.c         | 26 ++++++++++++++++++++++++++
>>   2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>>       NICConf *conf;
>>       void *opaque;
>>       bool peer_deleted;
>> +    VMChangeStateEntry *vmstate;
>>   } NICState;
>>
>>   NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>       return nc;
>>   }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> +                                       int running,
>> +                                       RunState state)
>> +{
>> +    NICState *nic = opaque;
>> +    NetClientState *nc;
>> +    int i, queues;
>> +
>> +    if (!running) {
>> +        return;
>> +    }
>> +
>> +    queues =  MAX(1, nic->conf->peers.queues);
>> +    for (i = 0; i < queues; i++) {
>> +        nc = &nic->ncs[i];
>> +        if (nc->receive_disabled
>> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>> +            continue;
>> +        }
>> +        qemu_flush_queued_packets(nc);
>> +    }
>> +}
>> +
>>   NICState *qemu_new_nic(NetClientInfo *info,
>>                          NICConf *conf,
>>                          const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>       nic->ncs = (void *)nic + info->size;
>>       nic->conf = conf;
>>       nic->opaque = opaque;
>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> +                                                    nic);
>>
>>       for (i = 0; i < queues; i++) {
>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>           qemu_free_net_client(nc);
>>       }
>>
>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>       g_free(nic);
>>   }
>>
>> --
>> 1.7.12.4
>>
>
> .
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
  2014-08-14 10:09   ` Michael S. Tsirkin
@ 2014-08-18  0:46     ` zhanghailiang
  0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-18  0:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
	stefanha, akong

On 2014/8/14 18:09, Michael S. Tsirkin wrote:
> On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>> (2) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>
> Hmm looks like virtio patch will need to be squashed as well?
>

OK, thanks.

>> ---
>>   include/net/net.h |  1 +
>>   net/net.c         | 26 ++++++++++++++++++++++++++
>>   2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>>       NICConf *conf;
>>       void *opaque;
>>       bool peer_deleted;
>> +    VMChangeStateEntry *vmstate;
>>   } NICState;
>>
>>   NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>       return nc;
>>   }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> +                                       int running,
>> +                                       RunState state)
>> +{
>> +    NICState *nic = opaque;
>> +    NetClientState *nc;
>> +    int i, queues;
>> +
>> +    if (!running) {
>> +        return;
>> +    }
>> +
>> +    queues =  MAX(1, nic->conf->peers.queues);
>> +    for (i = 0; i < queues; i++) {
>> +        nc = &nic->ncs[i];
>> +        if (nc->receive_disabled
>> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>> +            continue;
>> +        }
>> +        qemu_flush_queued_packets(nc);
>> +    }
>> +}
>> +
>>   NICState *qemu_new_nic(NetClientInfo *info,
>>                          NICConf *conf,
>>                          const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>       nic->ncs = (void *)nic + info->size;
>>       nic->conf = conf;
>>       nic->opaque = opaque;
>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> +                                                    nic);
>>
>>       for (i = 0; i < queues; i++) {
>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>           qemu_free_net_client(nc);
>>       }
>>
>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>       g_free(nic);
>>   }
>>
>> --
>> 1.7.12.4
>>
>
> .
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2014-08-18  0:46 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-14  6:13 [Qemu-devel] [PATCH 0/3] forbid dealing with net packets when VM is not running zhanghailiang
2014-08-14  6:13 ` [Qemu-devel] [PATCH 1/3] net: Forbid dealing with " zhanghailiang
2014-08-14  6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
2014-08-14  7:12   ` Gonglei (Arei)
2014-08-14  8:24     ` zhanghailiang
2014-08-14 10:05   ` Michael S. Tsirkin
2014-08-18  0:45     ` zhanghailiang
2014-08-14 10:09   ` Michael S. Tsirkin
2014-08-18  0:46     ` zhanghailiang
2014-08-14  6:13 ` [Qemu-devel] [PATCH 3/3] virtio-net: Remove checking vm state in virtio_net_can_receive zhanghailiang
