* [RFC 1/2] virtio: new post_load hook
From: Michael S. Tsirkin @ 2019-10-10 18:04 UTC
  To: qemu-devel; +Cc: mikhail.sennikovskii, dgilbert, stefanha

The post_load hook in the virtio vmsd is called early, while the device
is still being processed and before the VirtIODevice core is fully
initialized.  Most device-specific code isn't ready to deal with a
device in that state and misbehaves.

Add a new post_load hook in a device class instead.  Devices should use
this unless they specifically want to verify the migration stream as
it's processed, e.g. for bounds checking.
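
For illustration, a device wanting the late hook would wire things up
roughly like this (a sketch only; the "foo" names are placeholders, not
an existing device):

    static int foo_vmsd_post_load(void *opaque, int version_id)
    {
        /* Early hook: runs while virtio_load() is still parsing the
         * stream; fine for validating loaded fields, not for touching
         * state that virtio_load() hasn't restored yet. */
        return 0;
    }

    static const VMStateDescription vmstate_foo_device = {
        .name = "foo-device",
        .post_load = foo_vmsd_post_load,
    };

    static int foo_post_load(VirtIODevice *vdev)
    {
        /* Late hook: runs at the end of virtio_load(), after the device
         * vmsd and the guest features have been restored. */
        return 0;
    }

    static void foo_class_init(ObjectClass *klass, void *data)
    {
        VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);

        vdc->vmsd = &vmstate_foo_device;
        vdc->post_load = foo_post_load;
    }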

Suggested-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/virtio/virtio.c         | 7 +++++++
 include/hw/virtio/virtio.h | 6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 527df03bfd..54a46e204c 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2291,6 +2291,13 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
     }
     rcu_read_unlock();
 
+    if (vdc->post_load) {
+        ret = vdc->post_load(vdev);
+        if (ret) {
+            return ret;
+        }
+    }
+
     return 0;
 }
 
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 48e8d04ff6..ca4f9c0bcc 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -158,6 +158,12 @@ typedef struct VirtioDeviceClass {
      */
     void (*save)(VirtIODevice *vdev, QEMUFile *f);
     int (*load)(VirtIODevice *vdev, QEMUFile *f, int version_id);
+    /* The post_load hook in vmsd is called early, while the device is being
+     * processed and the VirtIODevice isn't fully initialized yet.  Devices
+     * should use this hook instead, unless they specifically want to verify
+     * the migration stream as it's processed, e.g. for bounds checking.
+     */
+    int (*post_load)(VirtIODevice *vdev);
     const VMStateDescription *vmsd;
 } VirtioDeviceClass;
 
-- 
MST




* [RFC 2/2] virtio-net: use post load hook
From: Michael S. Tsirkin @ 2019-10-10 18:04 UTC
  To: qemu-devel; +Cc: Jason Wang, mikhail.sennikovskii, dgilbert, stefanha

Currently, offloads disabled by the guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
command are not preserved across VM migration.
Instead, all offloads reported by the guest features (via VIRTIO_PCI_GUEST_FEATURES)
get enabled.
What happens is: first, VirtIONet::curr_guest_offloads is restored
and the offloads are set correctly:

 #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
 #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
 #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
 #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
     at migration/vmstate.c:168
 #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
 #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
 #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
 #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
 #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
 #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
 #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
 #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

However, later on the features are restored, and the offloads get reset to
everything the features support:

 #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
 #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
 #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
 #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
 #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
 #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
 #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
 #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
 #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
 #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
 #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
 #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

Fix this by pushing out offload initialization to the new post load hook.

Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/net/virtio-net.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9f11422337..62fb858e2d 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
         n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
     }
 
-    if (peer_has_vnet_hdr(n)) {
-        virtio_net_apply_guest_offloads(n);
-    }
-
     virtio_net_set_queues(n);
 
     /* Find the first multicast entry in the saved MAC filter */
@@ -2370,6 +2366,17 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
     return 0;
 }
 
+static int virtio_net_post_load_virtio(VirtIODevice *vdev)
+{
+    VirtIONet *n = VIRTIO_NET(vdev);
+
+    if (peer_has_vnet_hdr(n)) {
+        virtio_net_apply_guest_offloads(n);
+    }
+
+    return 0;
+}
+
 /* tx_waiting field of a VirtIONetQueue */
 static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
     .name = "virtio-net-queue-tx_waiting",
@@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
     vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
     vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
     vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
+    vdc->post_load = virtio_net_post_load_virtio;
     vdc->vmsd = &vmstate_virtio_net_device;
 }
 
-- 
MST




* Re: [RFC 2/2] virtio-net: use post load hook
From: Jason Wang @ 2019-10-11  2:51 UTC
  To: Michael S. Tsirkin, qemu-devel; +Cc: mikhail.sennikovskii, dgilbert, stefanha


On 2019/10/11 2:04 AM, Michael S. Tsirkin wrote:
> Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> command are not preserved on VM migration.
> Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> get enabled.
> What happens is: first the VirtIONet::curr_guest_offloads gets restored
> and offloads are getting set correctly:
>
>   #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
>   #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>   #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
>   #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
>       at migration/vmstate.c:168
>   #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
>   #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>   #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>   #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>   #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>   #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>   #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>   #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
>
> However later on the features are getting restored, and offloads get reset to
> everything supported by features:
>
>   #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
>   #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>   #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
>   #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
>   #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
>   #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>   #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>   #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>   #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>   #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>   #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>   #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
>
> Fix this by pushing out offload initialization to the new post load hook.
>
> Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>   hw/net/virtio-net.c | 14 ++++++++++----
>   1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 9f11422337..62fb858e2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>           n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
>       }
>   
> -    if (peer_has_vnet_hdr(n)) {
> -        virtio_net_apply_guest_offloads(n);
> -    }
> -
>       virtio_net_set_queues(n);
>   
>       /* Find the first multicast entry in the saved MAC filter */
> @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>       return 0;
>   }
>   
> +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> +{
> +    if (peer_has_vnet_hdr(n)) {
> +        virtio_net_apply_guest_offloads(n);
> +    }
> +
> +    return 0;
> +}
> +
>   /* tx_waiting field of a VirtIONetQueue */
>   static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
>       .name = "virtio-net-queue-tx_waiting",
> @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
>       vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
>       vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
>       vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> +    vdc->post_load = virtio_net_post_load_virtio;
>       vdc->vmsd = &vmstate_virtio_net_device;
>   }
>   


Looks good to me.

Thanks




* Re: [RFC 2/2] virtio-net: use post load hook
From: Mikhail Sennikovsky @ 2019-10-11  9:46 UTC
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

Hi Michael,

Unfortunately your approach will not work, because the
VirtIONet::curr_guest_offloads would still be reset in
virtio_net_set_features:
--
if (n->has_vnet_hdr) {
    n->curr_guest_offloads =
        virtio_net_guest_offloads_by_features(features);
--
( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )

I.e. although virtio_net_apply_guest_offloads would now be called
after the virtio_net_set_features, by the time it is called the
VirtIONet::curr_guest_offloads would be reset to a full list of
features.

Regards,
Mikhail

On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> command are not preserved on VM migration.
> Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> get enabled.
> What happens is: first the VirtIONet::curr_guest_offloads gets restored
> and offloads are getting set correctly:
>
>  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
>  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
>  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
>      at migration/vmstate.c:168
>  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
>  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
>
> However later on the features are getting restored, and offloads get reset to
> everything supported by features:
>
>  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
>  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
>  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
>  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
>  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
>  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
>  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
>  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
>  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
>  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
>  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
>  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
>
> Fix this by pushing out offload initialization to the new post load hook.
>
> Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  hw/net/virtio-net.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 9f11422337..62fb858e2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
>      }
>
> -    if (peer_has_vnet_hdr(n)) {
> -        virtio_net_apply_guest_offloads(n);
> -    }
> -
>      virtio_net_set_queues(n);
>
>      /* Find the first multicast entry in the saved MAC filter */
> @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>      return 0;
>  }
>
> +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> +{
> +    if (peer_has_vnet_hdr(n)) {
> +        virtio_net_apply_guest_offloads(n);
> +    }
> +
> +    return 0;
> +}
> +
>  /* tx_waiting field of a VirtIONetQueue */
>  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
>      .name = "virtio-net-queue-tx_waiting",
> @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
>      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
>      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
>      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> +    vdc->post_load = virtio_net_post_load_virtio;
>      vdc->vmsd = &vmstate_virtio_net_device;
>  }
>
> --
> MST
>



* Re: [RFC 2/2] virtio-net: use post load hook
From: Michael S. Tsirkin @ 2019-10-11  9:51 UTC
  To: Mikhail Sennikovsky
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> Hi Michael,
> 
> Unfortunately your approach will not work, because the
> VirtIONet::curr_guest_offloads would still be reset in
> virtio_net_set_features:
> --
> if (n->has_vnet_hdr) {
>     n->curr_guest_offloads =
>         virtio_net_guest_offloads_by_features(features);

So let's move that part to the new hook too.

> --
> ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> 
> I.e. although virtio_net_apply_guest_offloads would now be called
> after the virtio_net_set_features, by the time it is called the
> VirtIONet::curr_guest_offloads would be reset to a full list of
> features.
> 
> Regards,
> Mikhail
> 
> On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > command are not preserved on VM migration.
> > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > get enabled.
> > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > and offloads are getting set correctly:
> >
> >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> >      at migration/vmstate.c:168
> >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> >
> > However later on the features are getting restored, and offloads get reset to
> > everything supported by features:
> >
> >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> >
> > Fix this by pushing out offload initialization to the new post load hook.
> >
> > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > ---
> >  hw/net/virtio-net.c | 14 ++++++++++----
> >  1 file changed, 10 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 9f11422337..62fb858e2d 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> >      }
> >
> > -    if (peer_has_vnet_hdr(n)) {
> > -        virtio_net_apply_guest_offloads(n);
> > -    }
> > -
> >      virtio_net_set_queues(n);
> >
> >      /* Find the first multicast entry in the saved MAC filter */
> > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >      return 0;
> >  }
> >
> > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > +{
> > +    if (peer_has_vnet_hdr(n)) {
> > +        virtio_net_apply_guest_offloads(n);
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >  /* tx_waiting field of a VirtIONetQueue */
> >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> >      .name = "virtio-net-queue-tx_waiting",
> > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > +    vdc->post_load = virtio_net_post_load_virtio;
> >      vdc->vmsd = &vmstate_virtio_net_device;
> >  }
> >
> > --
> > MST
> >



* Re: [RFC 2/2] virtio-net: use post load hook
From: Mikhail Sennikovsky @ 2019-10-11  9:58 UTC
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

Note that virtio_net_set_features also gets called from
virtio_pci_common_write when the guest configures the virtio device.
In that case the curr_guest_offloads are still expected to be reset.

Mikhail

On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > Hi Michael,
> >
> > Unfortunately your approach will not work, because the
> > VirtIONet::curr_guest_offloads would still be reset in
> > virtio_net_set_features:
> > --
> > if (n->has_vnet_hdr) {
> >     n->curr_guest_offloads =
> >         virtio_net_guest_offloads_by_features(features);
>
> So let's move that part to the new hook too.
>
> > --
> > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> >
> > I.e. although virtio_net_apply_guest_offloads would now be called
> > after the virtio_net_set_features, by the time it is called the
> > VirtIONet::curr_guest_offloads would be reset to a full list of
> > features.
> >
> > Regards,
> > Mikhail
> >
> > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > command are not preserved on VM migration.
> > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > get enabled.
> > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > and offloads are getting set correctly:
> > >
> > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > >      at migration/vmstate.c:168
> > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > >
> > > However later on the features are getting restored, and offloads get reset to
> > > everything supported by features:
> > >
> > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > >
> > > Fix this by pushing out offload initialization to the new post load hook.
> > >
> > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > ---
> > >  hw/net/virtio-net.c | 14 ++++++++++----
> > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 9f11422337..62fb858e2d 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > >      }
> > >
> > > -    if (peer_has_vnet_hdr(n)) {
> > > -        virtio_net_apply_guest_offloads(n);
> > > -    }
> > > -
> > >      virtio_net_set_queues(n);
> > >
> > >      /* Find the first multicast entry in the saved MAC filter */
> > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > >      return 0;
> > >  }
> > >
> > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > +{
> > > +    if (peer_has_vnet_hdr(n)) {
> > > +        virtio_net_apply_guest_offloads(n);
> > > +    }
> > > +
> > > +    return 0;
> > > +}
> > > +
> > >  /* tx_waiting field of a VirtIONetQueue */
> > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > >      .name = "virtio-net-queue-tx_waiting",
> > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > +    vdc->post_load = virtio_net_post_load_virtio;
> > >      vdc->vmsd = &vmstate_virtio_net_device;
> > >  }
> > >
> > > --
> > > MST
> > >



* Re: [RFC 2/2] virtio-net: use post load hook
From: Michael S. Tsirkin @ 2019-10-11 10:08 UTC
  To: Mikhail Sennikovsky
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

Ugh. Okay.
So let's add a new field saved_guest_offloads and have
vmstate use that instead of curr_guest_offloads.

Now the new post load hook can do

    n->curr_guest_offloads = n->saved_guest_offloads

and apply that.

And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
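
Spelled out, that would be something like the following (just a sketch of
the idea above; the field, the hook names and the vmstate change are
illustrative, not an applied patch):

    /* new VirtIONet field, migrated in place of curr_guest_offloads */
    uint64_t saved_guest_offloads;

    static int virtio_net_pre_save(void *opaque)
    {
        VirtIONet *n = opaque;

        /* snapshot what the guest has currently negotiated */
        n->saved_guest_offloads = n->curr_guest_offloads;
        return 0;
    }

    static int virtio_net_post_load_virtio(VirtIODevice *vdev)
    {
        VirtIONet *n = VIRTIO_NET(vdev);

        /* runs after virtio_load() has replayed the features, so the
         * restored value is no longer clobbered by set_features */
        n->curr_guest_offloads = n->saved_guest_offloads;
        if (peer_has_vnet_hdr(n)) {
            virtio_net_apply_guest_offloads(n);
        }
        return 0;
    }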


On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> Note that the virtio_net_set_features gets also called from the
> virtio_pci_common_write when guest does virtio device configuration.
> In that case the curr_guest_offloads are still expected to be reset.
> 
> Mikhail
> 
> On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > Hi Michael,
> > >
> > > Unfortunately your approach will not work, because the
> > > VirtIONet::curr_guest_offloads would still be reset in
> > > virtio_net_set_features:
> > > --
> > > if (n->has_vnet_hdr) {
> > >     n->curr_guest_offloads =
> > >         virtio_net_guest_offloads_by_features(features);
> >
> > So let's move that part to the new hook too.
> >
> > > --
> > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > >
> > > I.e. although virtio_net_apply_guest_offloads would now be called
> > > after the virtio_net_set_features, by the time it is called the
> > > VirtIONet::curr_guest_offloads would be reset to a full list of
> > > features.
> > >
> > > Regards,
> > > Mikhail
> > >
> > > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > > command are not preserved on VM migration.
> > > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > > get enabled.
> > > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > > and offloads are getting set correctly:
> > > >
> > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > > >      at migration/vmstate.c:168
> > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > >
> > > > However later on the features are getting restored, and offloads get reset to
> > > > everything supported by features:
> > > >
> > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > >
> > > > Fix this by pushing out offload initialization to the new post load hook.
> > > >
> > > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > ---
> > > >  hw/net/virtio-net.c | 14 ++++++++++----
> > > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 9f11422337..62fb858e2d 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > >      }
> > > >
> > > > -    if (peer_has_vnet_hdr(n)) {
> > > > -        virtio_net_apply_guest_offloads(n);
> > > > -    }
> > > > -
> > > >      virtio_net_set_queues(n);
> > > >
> > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > >      return 0;
> > > >  }
> > > >
> > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > +{
> > > > +    if (peer_has_vnet_hdr(n)) {
> > > > +        virtio_net_apply_guest_offloads(n);
> > > > +    }
> > > > +
> > > > +    return 0;
> > > > +}
> > > > +
> > > >  /* tx_waiting field of a VirtIONetQueue */
> > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > >      .name = "virtio-net-queue-tx_waiting",
> > > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > >  }
> > > >
> > > > --
> > > > MST
> > > >



* Re: [RFC 2/2] virtio-net: use post load hook
From: Mikhail Sennikovsky @ 2019-10-11 10:30 UTC
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

On Fri, Oct 11, 2019 at 12:08, Michael S. Tsirkin <mst@redhat.com> wrote:
>...
> And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
Would you want to have the saved_guest_offloads as part of the saved state?
The curr_guest_offloads info is already there, so why would you want
to duplicate that?
Wouldn't it be better to just do n->saved_guest_offloads =
n->curr_guest_offloads in virtio_net_post_load_device,
and then do
    n->curr_guest_offloads = n->saved_guest_offloads;
    if (peer_has_vnet_hdr(n)) {
        virtio_net_apply_guest_offloads(n);
in the new post load hook (virtio_net_post_load_virtio) exactly like you say?

Mikhail

>
>
> On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> > Note that the virtio_net_set_features gets also called from the
> > virtio_pci_common_write when guest does virtio device configuration.
> > In that case the curr_guest_offloads are still expected to be reset.
> >
> > Mikhail
> >
> On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > > Hi Michael,
> > > >
> > > > Unfortunately your approach will not work, because the
> > > > VirtIONet::curr_guest_offloads would still be reset in
> > > > virtio_net_set_features:
> > > > --
> > > > if (n->has_vnet_hdr) {
> > > >     n->curr_guest_offloads =
> > > >         virtio_net_guest_offloads_by_features(features);
> > >
> > > So let's move that part to the new hook too.
> > >
> > > > --
> > > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > > >
> > > > I.e. although virtio_net_apply_guest_offloads would now be called
> > > > after the virtio_net_set_features, by the time it is called the
> > > > VirtIONet::curr_guest_offloads would be reset to a full list of
> > > > features.
> > > >
> > > > Regards,
> > > > Mikhail
> > > >
> > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > >
> > > > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > > > command are not preserved on VM migration.
> > > > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > > > get enabled.
> > > > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > > > and offloads are getting set correctly:
> > > > >
> > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > > > >      at migration/vmstate.c:168
> > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > >
> > > > > However later on the features are getting restored, and offloads get reset to
> > > > > everything supported by features:
> > > > >
> > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > >
> > > > > Fix this by pushing out offload initialization to the new post load hook.
> > > > >
> > > > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > ---
> > > > >  hw/net/virtio-net.c | 14 ++++++++++----
> > > > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 9f11422337..62fb858e2d 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > > >      }
> > > > >
> > > > > -    if (peer_has_vnet_hdr(n)) {
> > > > > -        virtio_net_apply_guest_offloads(n);
> > > > > -    }
> > > > > -
> > > > >      virtio_net_set_queues(n);
> > > > >
> > > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > >      return 0;
> > > > >  }
> > > > >
> > > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > > +{
> > > > > +    if (peer_has_vnet_hdr(n)) {
> > > > > +        virtio_net_apply_guest_offloads(n);
> > > > > +    }
> > > > > +
> > > > > +    return 0;
> > > > > +}
> > > > > +
> > > > >  /* tx_waiting field of a VirtIONetQueue */
> > > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > > >      .name = "virtio-net-queue-tx_waiting",
> > > > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > > >  }
> > > > >
> > > > > --
> > > > > MST
> > > > >



* Re: [RFC 2/2] virtio-net: use post load hook
From: Mikhail Sennikovsky @ 2019-10-11 10:34 UTC
  To: Michael S. Tsirkin
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

I still wonder, though, whether this approach is really cleaner than my
original one of adding an extra argument to the set_features callback
that says whether the device settings (the offloads, in the case of
virtio-net) need to be reset.
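
For comparison, that alternative would look roughly like this (purely
illustrative: the extra parameter is hypothetical and only the
offload-related part of virtio_net_set_features is shown):

    /* hypothetical change to the VirtioDeviceClass callback */
    void (*set_features)(VirtIODevice *vdev, uint64_t val, bool restoring);

    static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features,
                                        bool restoring)
    {
        VirtIONet *n = VIRTIO_NET(vdev);

        if (n->has_vnet_hdr && !restoring) {
            /* only a guest-initiated feature write recomputes the offloads;
             * the migration path keeps whatever the stream restored */
            n->curr_guest_offloads =
                virtio_net_guest_offloads_by_features(features);
            virtio_net_apply_guest_offloads(n);
        }
    }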

Mikhail

On Fri, Oct 11, 2019 at 12:30, Mikhail Sennikovsky
<mikhail.sennikovskii@cloud.ionos.com> wrote:
>
> On Fri, Oct 11, 2019 at 12:08, Michael S. Tsirkin <mst@redhat.com> wrote:
> >...
> > And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
> Would you want to have the saved_guest_offloads as part of the saved state?
> The curr_guest_offloads info is already there, so why would you want
> to duplicate that?
> Wouldn't it be better to just do n->saved_guest_offloads =
> n->curr_guest_offloads in virtio_net_post_load_device,
> and then do
>     n->curr_guest_offloads = n->saved_guest_offloads;
>     if (peer_has_vnet_hdr(n)) {
>         virtio_net_apply_guest_offloads(n);
> in the new post load hook (virtio_net_post_load_virtio) exactly like you say?
>
> Mikhail
>
> >
> >
> > On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> > > Note that the virtio_net_set_features gets also called from the
> > > virtio_pci_common_write when guest does virtio device configuration.
> > > In that case the curr_guest_offloads are still expected to be reset.
> > >
> > > Mikhail
> > >
> > > On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > > > Hi Michael,
> > > > >
> > > > > Unfortunately your approach will not work, because the
> > > > > VirtIONet::curr_guest_offloads would still be reset in
> > > > > virtio_net_set_features:
> > > > > --
> > > > > if (n->has_vnet_hdr) {
> > > > >     n->curr_guest_offloads =
> > > > >         virtio_net_guest_offloads_by_features(features);
> > > >
> > > > So let's move that part to the new hook too.
> > > >
> > > > > --
> > > > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > > > >
> > > > > I.e. although virtio_net_apply_guest_offloads would now be called
> > > > > after the virtio_net_set_features, by the time it is called the
> > > > > VirtIONet::curr_guest_offloads would be reset to a full list of
> > > > > features.
> > > > >
> > > > > Regards,
> > > > > Mikhail
> > > > >
> > > > > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > >
> > > > > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > > > > command are not preserved on VM migration.
> > > > > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > > > > get enabled.
> > > > > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > > > > and offloads are getting set correctly:
> > > > > >
> > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > > > > >      at migration/vmstate.c:168
> > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > >
> > > > > > However later on the features are getting restored, and offloads get reset to
> > > > > > everything supported by features:
> > > > > >
> > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > >
> > > > > > Fix this by pushing out offload initialization to the new post load hook.
> > > > > >
> > > > > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > ---
> > > > > >  hw/net/virtio-net.c | 14 ++++++++++----
> > > > > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > > > > >
> > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > index 9f11422337..62fb858e2d 100644
> > > > > > --- a/hw/net/virtio-net.c
> > > > > > +++ b/hw/net/virtio-net.c
> > > > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > > > >      }
> > > > > >
> > > > > > -    if (peer_has_vnet_hdr(n)) {
> > > > > > -        virtio_net_apply_guest_offloads(n);
> > > > > > -    }
> > > > > > -
> > > > > >      virtio_net_set_queues(n);
> > > > > >
> > > > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > >      return 0;
> > > > > >  }
> > > > > >
> > > > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > > > +{
> > > > > > +    if (peer_has_vnet_hdr(n)) {
> > > > > > +        virtio_net_apply_guest_offloads(n);
> > > > > > +    }
> > > > > > +
> > > > > > +    return 0;
> > > > > > +}
> > > > > > +
> > > > > >  /* tx_waiting field of a VirtIONetQueue */
> > > > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > > > >      .name = "virtio-net-queue-tx_waiting",
> > > > > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > > > >  }
> > > > > >
> > > > > > --
> > > > > > MST
> > > > > >



* Re: [RFC 2/2] virtio-net: use post load hook
From: Michael S. Tsirkin @ 2019-10-11 12:34 UTC
  To: Mikhail Sennikovsky
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

On Fri, Oct 11, 2019 at 12:30:07PM +0200, Mikhail Sennikovsky wrote:
> On Fri, Oct 11, 2019 at 12:08, Michael S. Tsirkin <mst@redhat.com> wrote:
> >...
> > And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
> Would you want to have the saved_guest_offloads as part of the saved state?
> The curr_guest_offloads info is already there, so why would you want
> to duplicate that?
> Wouldn't it be better to just do n->saved_guest_offloads =
> n->curr_guest_offloads in virtio_net_post_load_device,
> and then do
>     n->curr_guest_offloads = n->saved_guest_offloads;
>     if (peer_has_vnet_hdr(n)) {
>         virtio_net_apply_guest_offloads(n);
> in the new post load hook (virtio_net_post_load_virtio) exactly like you say?
> 
> Mikhail

Fine by me, too.

> >
> >
> > On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> > > Note that the virtio_net_set_features gets also called from the
> > > virtio_pci_common_write when guest does virtio device configuration.
> > > In that case the curr_guest_offloads are still expected to be reset.
> > >
> > > Mikhail
> > >
> > > On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > > > Hi Michael,
> > > > >
> > > > > Unfortunately your approach will not work, because the
> > > > > VirtIONet::curr_guest_offloads would still be reset in
> > > > > virtio_net_set_features:
> > > > > --
> > > > > if (n->has_vnet_hdr) {
> > > > >     n->curr_guest_offloads =
> > > > >         virtio_net_guest_offloads_by_features(features);
> > > >
> > > > So let's move that part to the new hook too.
> > > >
> > > > > --
> > > > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > > > >
> > > > > I.e. although virtio_net_apply_guest_offloads would now be called
> > > > > after the virtio_net_set_features, by the time it is called the
> > > > > VirtIONet::curr_guest_offloads would be reset to a full list of
> > > > > features.
> > > > >
> > > > > Regards,
> > > > > Mikhail
> > > > >
> > > > > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > >
> > > > > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > > > > command are not preserved on VM migration.
> > > > > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > > > > get enabled.
> > > > > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > > > > and offloads are getting set correctly:
> > > > > >
> > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > > > > >      at migration/vmstate.c:168
> > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > >
> > > > > > However later on the features are getting restored, and offloads get reset to
> > > > > > everything supported by features:
> > > > > >
> > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > >
> > > > > > Fix this by pushing out offload initialization to the new post load hook.
> > > > > >
> > > > > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > ---
> > > > > >  hw/net/virtio-net.c | 14 ++++++++++----
> > > > > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > > > > >
> > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > index 9f11422337..62fb858e2d 100644
> > > > > > --- a/hw/net/virtio-net.c
> > > > > > +++ b/hw/net/virtio-net.c
> > > > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > > > >      }
> > > > > >
> > > > > > -    if (peer_has_vnet_hdr(n)) {
> > > > > > -        virtio_net_apply_guest_offloads(n);
> > > > > > -    }
> > > > > > -
> > > > > >      virtio_net_set_queues(n);
> > > > > >
> > > > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > >      return 0;
> > > > > >  }
> > > > > >
> > > > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > > > +{
> > > > > > +    if (peer_has_vnet_hdr(n)) {
> > > > > > +        virtio_net_apply_guest_offloads(n);
> > > > > > +    }
> > > > > > +
> > > > > > +    return 0;
> > > > > > +}
> > > > > > +
> > > > > >  /* tx_waiting field of a VirtIONetQueue */
> > > > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > > > >      .name = "virtio-net-queue-tx_waiting",
> > > > > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > > > >  }
> > > > > >
> > > > > > --
> > > > > > MST
> > > > > >



* Re: [RFC 2/2] virtio-net: use post load hook
  2019-10-11 10:34             ` Mikhail Sennikovsky
@ 2019-10-11 12:51               ` Michael S. Tsirkin
  0 siblings, 0 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2019-10-11 12:51 UTC (permalink / raw)
  To: Mikhail Sennikovsky
  Cc: Jason Wang, qemu-devel, stefanha, Dr. David Alan Gilbert

On Fri, Oct 11, 2019 at 12:34:59PM +0200, Mikhail Sennikovsky wrote:
> I still wonder though if this approach is really cleaner than my
> original one of having an extra argument in set_features callback,
> saying whether the device settings (offloads in case of virtio-net)
> need to be reset.
> 
> Mikhail

I prefer this approach as I think it's generally a saner
way to restore the device state: first the virtio core,
then the device specific state.
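
With the new hook, the restore order for virtio-net is, roughly
(following the backtraces quoted earlier in this thread; an outline,
not a literal excerpt of virtio_load()):

    /*
     * 1. common VirtIODevice state (status, vrings, ...)
     * 2. vdc->vmsd                      -> virtio_net_post_load_device()
     * 3. virtio_set_features_nocheck()  -> virtio_net_set_features()
     * 4. vdc->post_load                 -> virtio_net_post_load_virtio()
     */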

We should consider not invoking the set_features callback
during load at all: its real purpose is to validate the
features, so the _nocheck variant makes no sense to me.

But that's a bigger change.



> 
> On Fri, Oct 11, 2019 at 12:30, Mikhail Sennikovsky
> <mikhail.sennikovskii@cloud.ionos.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 12:08, Michael S. Tsirkin <mst@redhat.com> wrote:
> > >...
> > > And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
> > Would you want to have the saved_guest_offloads as part of the saved state?
> > The curr_guest_offloads info is already there, so why would you want
> > to duplicate that?
> > Wouldn't it be better to just do n->saved_guest_offloads =
> > n->curr_guest_offloads in virtio_net_post_load_device,
> > and then do
> >     n->curr_guest_offloads = n->saved_guest_offloads;
> >     if (peer_has_vnet_hdr(n)) {
> >         virtio_net_apply_guest_offloads(n);
> >     }
> > in the new post load hook (virtio_net_post_load_virtio) exactly like you say?
> >
> > Mikhail
> >
> > >
> > >
> > > On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> > > > Note that virtio_net_set_features also gets called from
> > > > virtio_pci_common_write when the guest configures the virtio device.
> > > > In that case the curr_guest_offloads are still expected to be reset.
> > > >
> > > > Mikhail
> > > >
> > > > On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > >
> > > > > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > > > > Hi Michael,
> > > > > >
> > > > > > Unfortunately your approach will not work, because the
> > > > > > VirtIONet::curr_guest_offloads would still be reset in
> > > > > > virtio_net_set_features:
> > > > > > --
> > > > > > if (n->has_vnet_hdr) {
> > > > > >     n->curr_guest_offloads =
> > > > > >         virtio_net_guest_offloads_by_features(features);
> > > > >
> > > > > So let's move that part to the new hook too.
> > > > >
> > > > > > --
> > > > > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > > > > >
> > > > > > I.e. although virtio_net_apply_guest_offloads would now be called
> > > > > > after the virtio_net_set_features, by the time it is called the
> > > > > > VirtIONet::curr_guest_offloads would be reset to a full list of
> > > > > > features.
> > > > > >
> > > > > > Regards,
> > > > > > Mikhail
> > > > > >
> > > > > > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > > >
> > > > > > > Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
> > > > > > > command are not preserved on VM migration.
> > > > > > > Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES)
> > > > > > > get enabled.
> > > > > > > What happens is: first the VirtIONet::curr_guest_offloads gets restored
> > > > > > > and offloads are getting set correctly:
> > > > > > >
> > > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > > > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
> > > > > > >      at migration/vmstate.c:168
> > > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > > >
> > > > > > > However later on the features are getting restored, and offloads get reset to
> > > > > > > everything supported by features:
> > > > > > >
> > > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > > > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > > >
> > > > > > > Fix this by pushing out offload initialization to the new post load hook.
> > > > > > >
> > > > > > > Reported-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > ---
> > > > > > >  hw/net/virtio-net.c | 14 ++++++++++----
> > > > > > >  1 file changed, 10 insertions(+), 4 deletions(-)
> > > > > > >
> > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > index 9f11422337..62fb858e2d 100644
> > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > > > > >      }
> > > > > > >
> > > > > > > -    if (peer_has_vnet_hdr(n)) {
> > > > > > > -        virtio_net_apply_guest_offloads(n);
> > > > > > > -    }
> > > > > > > -
> > > > > > >      virtio_net_set_queues(n);
> > > > > > >
> > > > > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > > > > @@ -2370,6 +2366,15 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > > >      return 0;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > > > > +{
> > > > > > > +    if (peer_has_vnet_hdr(n)) {
> > > > > > > +        virtio_net_apply_guest_offloads(n);
> > > > > > > +    }
> > > > > > > +
> > > > > > > +    return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > >  /* tx_waiting field of a VirtIONetQueue */
> > > > > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > > > > >      .name = "virtio-net-queue-tx_waiting",
> > > > > > > @@ -2912,6 +2917,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > > > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > > > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > > > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > > > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > > > > >  }
> > > > > > >
> > > > > > > --
> > > > > > > MST
> > > > > > >



* Re: [RFC 1/2] virtio: new post_load hook
  2019-10-10 18:04 [RFC 1/2] virtio: new post_load hook Michael S. Tsirkin
  2019-10-10 18:04 ` [RFC 2/2] virtio-net: use post load hook Michael S. Tsirkin
@ 2019-10-11 13:15 ` Alex Bennée
  1 sibling, 0 replies; 12+ messages in thread
From: Alex Bennée @ 2019-10-11 13:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: mikhail.sennikovskii, dgilbert, stefanha


Michael S. Tsirkin <mst@redhat.com> writes:

> Post load hook in virtio vmsd is called early while device is processed,
> and when VirtIODevice core isn't fully initialized.  Most device
> specific code isn't ready to deal with a device in such state, and
> behaves weirdly.
>
> Add a new post_load hook in a device class instead.  Devices should use
> this unless they specifically want to verify the migration stream as
> it's processed, e.g. for bounds checking.
>
> Suggested-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> Cc: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  hw/virtio/virtio.c         | 7 +++++++
>  include/hw/virtio/virtio.h | 6 ++++++
>  2 files changed, 13 insertions(+)
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 527df03bfd..54a46e204c 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -2291,6 +2291,13 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
>      }
>      rcu_read_unlock();
>
> +    if (vdc->post_load) {
> +        ret = vdc->post_load(vdev);
> +        if (ret) {
> +            return ret;
> +        }

I see this pattern repeated above, where the early exit on error is
needed, but here it seems superfluous. Why not simply:

    return vdc->post_load(vdev);

?
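
That is, keeping the NULL check, the tail of virtio_load() would just
be (a sketch of the suggestion, not a tested change):

    if (vdc->post_load) {
        return vdc->post_load(vdev);
    }

    return 0;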


> +    }
> +
>      return 0;
>  }
>
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 48e8d04ff6..ca4f9c0bcc 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -158,6 +158,12 @@ typedef struct VirtioDeviceClass {
>       */
>      void (*save)(VirtIODevice *vdev, QEMUFile *f);
>      int (*load)(VirtIODevice *vdev, QEMUFile *f, int version_id);
> +    /* Post load hook in vmsd is called early while device is processed, and
> +     * when VirtIODevice isn't fully initialized.  Devices should use this instead,
> +     * unless they specifically want to verify the migration stream as it's
> +     * processed, e.g. for bounds checking.
> +     */
> +    int (*post_load)(VirtIODevice *vdev);
>      const VMStateDescription *vmsd;
>  } VirtioDeviceClass;


--
Alex Bennée



Thread overview: 12+ messages
2019-10-10 18:04 [RFC 1/2] virtio: new post_load hook Michael S. Tsirkin
2019-10-10 18:04 ` [RFC 2/2] virtio-net: use post load hook Michael S. Tsirkin
2019-10-11  2:51   ` Jason Wang
2019-10-11  9:46   ` Mikhail Sennikovsky
2019-10-11  9:51     ` Michael S. Tsirkin
2019-10-11  9:58       ` Mikhail Sennikovsky
2019-10-11 10:08         ` Michael S. Tsirkin
2019-10-11 10:30           ` Mikhail Sennikovsky
2019-10-11 10:34             ` Mikhail Sennikovsky
2019-10-11 12:51               ` Michael S. Tsirkin
2019-10-11 12:34             ` Michael S. Tsirkin
2019-10-11 13:15 ` [RFC 1/2] virtio: new post_load hook Alex Bennée
