* [PULL 0/3] Net patches
@ 2022-07-26  8:50 Jason Wang
  2022-07-26  8:50 ` [PULL 1/3] e1000e: Fix possible interrupt loss when using MSI Jason Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Jason Wang @ 2022-07-26  8:50 UTC (permalink / raw)
  To: qemu-devel, peter.maydell

The following changes since commit 5288bee45fbd33203b61f8c76e41b15bb5913e6e:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2022-07-21 11:13:01 +0100)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to 75a8ce64f6e37513698857fb4284170da163ed06:

  vdpa: Fix memory listener deletions of iova tree (2022-07-26 16:24:19 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Ake Koomsin (1):
      e1000e: Fix possible interrupt loss when using MSI

Eugenio Pérez (2):
      vhost: Get vring base from vq, not svq
      vdpa: Fix memory listener deletions of iova tree

 hw/net/e1000e_core.c   |  2 ++
 hw/virtio/vhost-vdpa.c | 26 +++++++++++++-------------
 2 files changed, 15 insertions(+), 13 deletions(-)



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PULL 1/3] e1000e: Fix possible interrupt loss when using MSI
  2022-07-26  8:50 [PULL 0/3] Net patches Jason Wang
@ 2022-07-26  8:50 ` Jason Wang
  2022-07-26  8:50 ` [PULL 2/3] vhost: Get vring base from vq, not svq Jason Wang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2022-07-26  8:50 UTC (permalink / raw)
  To: qemu-devel, peter.maydell; +Cc: Ake Koomsin, Jason Wang

From: Ake Koomsin <ake@igel.co.jp>

Commit "e1000e: Prevent MSI/MSI-X storms" introduced msi_causes_pending
to prevent interrupt storms. It was tested with MSI-X.

In the case of MSI, the guest can rely solely on interrupts to clear ICR.
Upon clearing all pending interrupts, msi_causes_pending gets cleared.
However, when e1000e_itr_should_postpone() in e1000e_send_msi() returns
true, the MSI never gets fired by e1000e_intrmgr_on_throttling_timer()
because msi_causes_pending is still set. This results in interrupt loss.

To prevent this, we need to clear msi_causes_pending when MSI is going
to get fired by the throttling timer. The guest can then receive
interrupts eventually.

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/e1000e_core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/net/e1000e_core.c b/hw/net/e1000e_core.c
index 2c51089..208e3e0 100644
--- a/hw/net/e1000e_core.c
+++ b/hw/net/e1000e_core.c
@@ -159,6 +159,8 @@ e1000e_intrmgr_on_throttling_timer(void *opaque)
 
     if (msi_enabled(timer->core->owner)) {
         trace_e1000e_irq_msi_notify_postponed();
+        /* Clear msi_causes_pending to fire MSI eventually */
+        timer->core->msi_causes_pending = 0;
         e1000e_set_interrupt_cause(timer->core, 0);
     } else {
         trace_e1000e_irq_legacy_notify_postponed();
-- 
2.7.4




* [PULL 2/3] vhost: Get vring base from vq, not svq
  2022-07-26  8:50 [PULL 0/3] Net patches Jason Wang
  2022-07-26  8:50 ` [PULL 1/3] e1000e: Fix possible interrupt loss when using MSI Jason Wang
@ 2022-07-26  8:50 ` Jason Wang
  2022-07-26  8:50 ` [PULL 3/3] vdpa: Fix memory listener deletions of iova tree Jason Wang
  2022-07-26 12:28 ` [PULL 0/3] Net patches Peter Maydell
  3 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2022-07-26  8:50 UTC (permalink / raw)
  To: qemu-devel, peter.maydell; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

The SVQ vring's used idx usually matches the guest-visible one, as long
as every guest buffer (GPA) maps to exactly one buffer within qemu's
VA. However, as we can see in virtqueue_map_desc, a single guest buffer
could map to many buffers in the SVQ vring.

It is also a mistake to rewind them at the migration source. Since
VirtQueue is able to migrate the in-flight descriptors, it is the
responsibility of the destination to perform the rewind in case it
cannot report the in-flight descriptors to the device.

This makes it easier to migrate between backends or to recover them in
vhost devices that support setting in-flight descriptors.

Fixes: 6d0b22266633 ("vdpa: Adapt vhost_vdpa_get_vring_base to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 291cd19..bce64f4 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1179,7 +1179,18 @@ static int vhost_vdpa_set_vring_base(struct vhost_dev *dev,
                                        struct vhost_vring_state *ring)
 {
     struct vhost_vdpa *v = dev->opaque;
+    VirtQueue *vq = virtio_get_queue(dev->vdev, ring->index);
 
+    /*
+     * vhost-vdpa devices does not support in-flight requests. Set all of them
+     * as available.
+     *
+     * TODO: This is ok for networking, but other kinds of devices might
+     * have problems with these retransmissions.
+     */
+    while (virtqueue_rewind(vq, 1)) {
+        continue;
+    }
     if (v->shadow_vqs_enabled) {
         /*
          * Device vring base was set at device start. SVQ base is handled by
@@ -1195,21 +1206,10 @@ static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
                                        struct vhost_vring_state *ring)
 {
     struct vhost_vdpa *v = dev->opaque;
-    int vdpa_idx = ring->index - dev->vq_index;
     int ret;
 
     if (v->shadow_vqs_enabled) {
-        VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, vdpa_idx);
-
-        /*
-         * Setting base as last used idx, so destination will see as available
-         * all the entries that the device did not use, including the in-flight
-         * processing ones.
-         *
-         * TODO: This is ok for networking, but other kinds of devices might
-         * have problems with these retransmissions.
-         */
-        ring->num = svq->last_used_idx;
+        ring->num = virtio_queue_get_last_avail_idx(dev->vdev, ring->index);
         return 0;
     }
 
-- 
2.7.4




* [PULL 3/3] vdpa: Fix memory listener deletions of iova tree
  2022-07-26  8:50 [PULL 0/3] Net patches Jason Wang
  2022-07-26  8:50 ` [PULL 1/3] e1000e: Fix possible interrupt loss when using MSI Jason Wang
  2022-07-26  8:50 ` [PULL 2/3] vhost: Get vring base from vq, not svq Jason Wang
@ 2022-07-26  8:50 ` Jason Wang
  2022-07-28  6:14   ` Lei Yang
  2022-07-26 12:28 ` [PULL 0/3] Net patches Peter Maydell
  3 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2022-07-26  8:50 UTC (permalink / raw)
  To: qemu-devel, peter.maydell; +Cc: Eugenio Pérez, Lei Yang, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

vhost_vdpa_listener_region_del always deletes the first iova entry
of the tree, since it uses the needle iova instead of the result's
one.

This was detected using a VGA virtual device in the VM using vdpa SVQ.
That device does some extra memory adding and deleting, so the wrong
entry was mapped / unmapped. This went undetected before because,
without that device, all the memory was mapped and unmapped as a
whole, but other conditions could trigger it too:

* mem_region had .iova = 0, .translated_addr = (correct GPA).
* iova_tree_find_iova returned the right result, but did not update
  mem_region.
* iova_tree_remove always removed the region with .iova = 0, while the
  right iova was sent to the device.
* If the next action is a map, it would fill the first region with
  .iova = 0, causing a mapping with the same iova, and the device
  complains.
* The next unmap would try to unmap iova = 0 again, causing the device
  to complain that no region was mapped at iova = 0.

Fixes: 34e3c94edaef ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index bce64f4..3ff9ce3 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -290,7 +290,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
 
         result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
         iova = result->iova;
-        vhost_iova_tree_remove(v->iova_tree, &mem_region);
+        vhost_iova_tree_remove(v->iova_tree, result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
     ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
-- 
2.7.4




* Re: [PULL 0/3] Net patches
  2022-07-26  8:50 [PULL 0/3] Net patches Jason Wang
                   ` (2 preceding siblings ...)
  2022-07-26  8:50 ` [PULL 3/3] vdpa: Fix memory listener deletions of iova tree Jason Wang
@ 2022-07-26 12:28 ` Peter Maydell
  3 siblings, 0 replies; 14+ messages in thread
From: Peter Maydell @ 2022-07-26 12:28 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel

On Tue, 26 Jul 2022 at 09:51, Jason Wang <jasowang@redhat.com> wrote:
>
> The following changes since commit 5288bee45fbd33203b61f8c76e41b15bb5913e6e:
>
>   Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2022-07-21 11:13:01 +0100)
>
> are available in the git repository at:
>
>   https://github.com/jasowang/qemu.git tags/net-pull-request
>
> for you to fetch changes up to 75a8ce64f6e37513698857fb4284170da163ed06:
>
>   vdpa: Fix memory listener deletions of iova tree (2022-07-26 16:24:19 +0800)
>
> ----------------------------------------------------------------
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.1
for any user-visible changes.

-- PMM



* Re: [PULL 3/3] vdpa: Fix memory listener deletions of iova tree
  2022-07-26  8:50 ` [PULL 3/3] vdpa: Fix memory listener deletions of iova tree Jason Wang
@ 2022-07-28  6:14   ` Lei Yang
  2022-07-28  6:26     ` Jason Wang
  0 siblings, 1 reply; 14+ messages in thread
From: Lei Yang @ 2022-07-28  6:14 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel, peter.maydell, Eugenio Pérez

[-- Attachment #1: Type: text/plain, Size: 6315 bytes --]

I manually changed this line and then tested this branch on a local
host. After the migration succeeds, a QEMU core dump occurs on
shutdown inside the guest.

Steps used to compile QEMU:
# git clone https://gitlab.com/eperezmartin/qemu-kvm.git
# cd qemu-kvm/
# mkdir build
# cd build/
# git checkout bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9
# vim /root/qemu-kvm/hw/virtio/vhost-vdpa.c
(Change "vhost_iova_tree_remove(v->iova_tree, &mem_region);" to
"vhost_iova_tree_remove(v->iova_tree, result);")
# ../configure --target-list=x86_64-softmmu --enable-debug
# make

Core dump messages:
# gdb /root/qemu-kvm/build/x86_64-softmmu/qemu-system-x86_64
core.qemu-system-x86.7419
(gdb) bt full
#0  0x000056107c19afa9 in vhost_vdpa_listener_region_del
(listener=0x7ff9a9c691a0, section=0x7ffd3889ad20)
    at ../hw/virtio/vhost-vdpa.c:290
        result = 0x0
        vaddr = 0x7ff29be00000
        mem_region = {iova = 0, translated_addr = 140679973961728, size =
30064771071, perm = IOMMU_NONE}
        v = 0x7ff9a9c69190
        iova = 4294967296
        llend = 34359738368
        llsize = 30064771072
        ret = 32765
        __func__ = "vhost_vdpa_listener_region_del"
#1  0x000056107c1ca915 in listener_del_address_space
(listener=0x7ff9a9c691a0, as=0x56107cccbc00 <address_space_memory>)
    at ../softmmu/memory.c:2939
        section =
          {size = 30064771072, mr = 0x56107e116270, fv = 0x7ff1e02a4090,
offset_within_region = 2147483648, offset_within_address_space =
4294967296, readonly = false, nonvolatile = false}
        view = 0x7ff1e02a4090
        fr = 0x7ff1e04027f0
#2  0x000056107c1cac39 in memory_listener_unregister
(listener=0x7ff9a9c691a0) at ../softmmu/memory.c:2989
#3  0x000056107c19d007 in vhost_vdpa_dev_start (dev=0x56107e126ea0,
started=false) at ../hw/virtio/vhost-vdpa.c:1134
        v = 0x7ff9a9c69190
        ok = true
#4  0x000056107c190252 in vhost_dev_stop (hdev=0x56107e126ea0,
vdev=0x56107f40cb50) at ../hw/virtio/vhost.c:1828
        i = 32761
        __PRETTY_FUNCTION__ = "vhost_dev_stop"
#5  0x000056107bebe26c in vhost_net_stop_one (net=0x56107e126ea0,
dev=0x56107f40cb50) at ../hw/net/vhost_net.c:315
        file = {index = 0, fd = -1}
        __PRETTY_FUNCTION__ = "vhost_net_stop_one"
#6  0x000056107bebe6bf in vhost_net_stop (dev=0x56107f40cb50,
ncs=0x56107f421850, data_queue_pairs=1, cvq=0)
    at ../hw/net/vhost_net.c:425
        qbus = 0x56107f40cac8
        vbus = 0x56107f40cac8
        k = 0x56107df1a220
        n = 0x56107f40cb50
        peer = 0x7ff9a9c69010
        total_notifiers = 2
        nvhosts = 1
        i = 0
--Type <RET> for more, q to quit, c to continue without paging--
        r = 32765
        __PRETTY_FUNCTION__ = "vhost_net_stop"
#7  0x000056107c14af24 in virtio_net_vhost_status (n=0x56107f40cb50,
status=15 '\017') at ../hw/net/virtio-net.c:298
        vdev = 0x56107f40cb50
        nc = 0x56107f421850
        queue_pairs = 1
        cvq = 0
#8  0x000056107c14b17e in virtio_net_set_status (vdev=0x56107f40cb50,
status=15 '\017') at ../hw/net/virtio-net.c:372
        n = 0x56107f40cb50
        q = 0x56107f40cb50
        i = 32765
        queue_status = 137 '\211'
#9  0x000056107c185af2 in virtio_set_status (vdev=0x56107f40cb50, val=15
'\017') at ../hw/virtio/virtio.c:1947
        k = 0x56107dfe2c60
#10 0x000056107c188cbb in virtio_vmstate_change (opaque=0x56107f40cb50,
running=false, state=RUN_STATE_SHUTDOWN)
    at ../hw/virtio/virtio.c:3195
        vdev = 0x56107f40cb50
        qbus = 0x56107f40cac8
        k = 0x56107df1a220
        backend_run = false
#11 0x000056107bfdca5e in vm_state_notify (running=false,
state=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:334
        e = 0x56107f419950
        next = 0x56107f224b80
#12 0x000056107bfd43e6 in do_vm_stop (state=RUN_STATE_SHUTDOWN,
send_stop=false) at ../softmmu/cpus.c:263
        ret = 0
#13 0x000056107bfd4420 in vm_shutdown () at ../softmmu/cpus.c:281
#14 0x000056107bfdd584 in qemu_cleanup () at ../softmmu/runstate.c:813
#15 0x000056107bd81a5b in main (argc=65, argv=0x7ffd3889b178,
envp=0x7ffd3889b388) at ../softmmu/main.c:51


Thanks
Lei

Jason Wang <jasowang@redhat.com> 于2022年7月26日周二 16:51写道:

> From: Eugenio Pérez <eperezma@redhat.com>
>
> vhost_vdpa_listener_region_del is always deleting the first iova entry
> of the tree, since it's using the needle iova instead of the result's
> one.
>
> This was detected using a vga virtual device in the VM using vdpa SVQ.
> It makes some extra memory adding and deleting, so the wrong one was
> mapped / unmapped. This was undetected before since all the memory was
> mappend and unmapped totally without that device, but other conditions
> could trigger it too:
>
> * mem_region was with .iova = 0, .translated_addr = (correct GPA).
> * iova_tree_find_iova returned right result, but does not update
>   mem_region.
> * iova_tree_remove always removed region with .iova = 0. Right iova were
>   sent to the device.
> * Next map will fill the first region with .iova = 0, causing a mapping
>   with the same iova and device complains, if the next action is a map.
> * Next unmap will cause to try to unmap again iova = 0, causing the
>   device to complain that no region was mapped at iova = 0.
>
> Fixes: 34e3c94edaef ("vdpa: Add custom IOTLB translations to SVQ")
> Reported-by: Lei Yang <leiyang@redhat.com>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/virtio/vhost-vdpa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index bce64f4..3ff9ce3 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -290,7 +290,7 @@ static void
> vhost_vdpa_listener_region_del(MemoryListener *listener,
>
>          result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
>          iova = result->iova;
> -        vhost_iova_tree_remove(v->iova_tree, &mem_region);
> +        vhost_iova_tree_remove(v->iova_tree, result);
>      }
>      vhost_vdpa_iotlb_batch_begin_once(v);
>      ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
> --
> 2.7.4
>
>

[-- Attachment #2: Type: text/html, Size: 9377 bytes --]


* Re: [PULL 3/3] vdpa: Fix memory listener deletions of iova tree
  2022-07-28  6:14   ` Lei Yang
@ 2022-07-28  6:26     ` Jason Wang
  2022-07-28  8:29       ` Lei Yang
  0 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2022-07-28  6:26 UTC (permalink / raw)
  To: Lei Yang; +Cc: qemu-devel, Peter Maydell, Eugenio Pérez

On Thu, Jul 28, 2022 at 2:14 PM Lei Yang <leiyang@redhat.com> wrote:
>
> I tried to manually changed this line then test this branch on local host. After the migration is successful, the qemu core dump occurs on the shutdown inside guest.
>
> Compiled qemu Steps:
> # git clone https://gitlab.com/eperezmartin/qemu-kvm.git
> # cd qemu-kvm/
> # mkdir build
> # cd build/
> # git checkout bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9

I got this

fatal: reference is not a tree: bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9

and my HEAD is:

commit 7b17a1a841fc2336eba53afade9cadb14bd3dd9a (HEAD -> master, tag:
v7.1.0-rc0, origin/master, origin/HEAD)
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jul 26 18:03:16 2022 -0700

    Update version for v7.1.0-rc0 release

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

> # vim /root/qemu-kvm/hw/virtio/vhost-vdpa.c
> (Chanege "vhost_iova_tree_remove(v->iova_tree, &mem_region);" to "vhost_iova_tree_remove(v->iova_tree, result);")

Any reason you need to manually change the line since it has been merged?

> # ../configure --target-list=x86_64-softmmu --enable-debug
> # make

So if I understand you correctly, you meant the issue is not fixed?

Thanks

>
> Core dump messages:
> # gdb /root/qemu-kvm/build/x86_64-softmmu/qemu-system-x86_64 core.qemu-system-x86.7419
> (gdb) bt full
> #0  0x000056107c19afa9 in vhost_vdpa_listener_region_del (listener=0x7ff9a9c691a0, section=0x7ffd3889ad20)
>     at ../hw/virtio/vhost-vdpa.c:290
>         result = 0x0
>         vaddr = 0x7ff29be00000
>         mem_region = {iova = 0, translated_addr = 140679973961728, size = 30064771071, perm = IOMMU_NONE}
>         v = 0x7ff9a9c69190
>         iova = 4294967296
>         llend = 34359738368
>         llsize = 30064771072
>         ret = 32765
>         __func__ = "vhost_vdpa_listener_region_del"
> #1  0x000056107c1ca915 in listener_del_address_space (listener=0x7ff9a9c691a0, as=0x56107cccbc00 <address_space_memory>)
>     at ../softmmu/memory.c:2939
>         section =
>           {size = 30064771072, mr = 0x56107e116270, fv = 0x7ff1e02a4090, offset_within_region = 2147483648, offset_within_address_space = 4294967296, readonly = false, nonvolatile = false}
>         view = 0x7ff1e02a4090
>         fr = 0x7ff1e04027f0
> #2  0x000056107c1cac39 in memory_listener_unregister (listener=0x7ff9a9c691a0) at ../softmmu/memory.c:2989
> #3  0x000056107c19d007 in vhost_vdpa_dev_start (dev=0x56107e126ea0, started=false) at ../hw/virtio/vhost-vdpa.c:1134
>         v = 0x7ff9a9c69190
>         ok = true
> #4  0x000056107c190252 in vhost_dev_stop (hdev=0x56107e126ea0, vdev=0x56107f40cb50) at ../hw/virtio/vhost.c:1828
>         i = 32761
>         __PRETTY_FUNCTION__ = "vhost_dev_stop"
> #5  0x000056107bebe26c in vhost_net_stop_one (net=0x56107e126ea0, dev=0x56107f40cb50) at ../hw/net/vhost_net.c:315
>         file = {index = 0, fd = -1}
>         __PRETTY_FUNCTION__ = "vhost_net_stop_one"
> #6  0x000056107bebe6bf in vhost_net_stop (dev=0x56107f40cb50, ncs=0x56107f421850, data_queue_pairs=1, cvq=0)
>     at ../hw/net/vhost_net.c:425
>         qbus = 0x56107f40cac8
>         vbus = 0x56107f40cac8
>         k = 0x56107df1a220
>         n = 0x56107f40cb50
>         peer = 0x7ff9a9c69010
>         total_notifiers = 2
>         nvhosts = 1
>         i = 0
> --Type <RET> for more, q to quit, c to continue without paging--
>         r = 32765
>         __PRETTY_FUNCTION__ = "vhost_net_stop"
> #7  0x000056107c14af24 in virtio_net_vhost_status (n=0x56107f40cb50, status=15 '\017') at ../hw/net/virtio-net.c:298
>         vdev = 0x56107f40cb50
>         nc = 0x56107f421850
>         queue_pairs = 1
>         cvq = 0
> #8  0x000056107c14b17e in virtio_net_set_status (vdev=0x56107f40cb50, status=15 '\017') at ../hw/net/virtio-net.c:372
>         n = 0x56107f40cb50
>         q = 0x56107f40cb50
>         i = 32765
>         queue_status = 137 '\211'
> #9  0x000056107c185af2 in virtio_set_status (vdev=0x56107f40cb50, val=15 '\017') at ../hw/virtio/virtio.c:1947
>         k = 0x56107dfe2c60
> #10 0x000056107c188cbb in virtio_vmstate_change (opaque=0x56107f40cb50, running=false, state=RUN_STATE_SHUTDOWN)
>     at ../hw/virtio/virtio.c:3195
>         vdev = 0x56107f40cb50
>         qbus = 0x56107f40cac8
>         k = 0x56107df1a220
>         backend_run = false
> #11 0x000056107bfdca5e in vm_state_notify (running=false, state=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:334
>         e = 0x56107f419950
>         next = 0x56107f224b80
> #12 0x000056107bfd43e6 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=false) at ../softmmu/cpus.c:263
>         ret = 0
> #13 0x000056107bfd4420 in vm_shutdown () at ../softmmu/cpus.c:281
> #14 0x000056107bfdd584 in qemu_cleanup () at ../softmmu/runstate.c:813
> #15 0x000056107bd81a5b in main (argc=65, argv=0x7ffd3889b178, envp=0x7ffd3889b388) at ../softmmu/main.c:51
>
>
> Thanks
> Lei
>
> Jason Wang <jasowang@redhat.com> 于2022年7月26日周二 16:51写道:
>>
>> From: Eugenio Pérez <eperezma@redhat.com>
>>
>> vhost_vdpa_listener_region_del is always deleting the first iova entry
>> of the tree, since it's using the needle iova instead of the result's
>> one.
>>
>> This was detected using a vga virtual device in the VM using vdpa SVQ.
>> It makes some extra memory adding and deleting, so the wrong one was
>> mapped / unmapped. This was undetected before since all the memory was
>> mappend and unmapped totally without that device, but other conditions
>> could trigger it too:
>>
>> * mem_region was with .iova = 0, .translated_addr = (correct GPA).
>> * iova_tree_find_iova returned right result, but does not update
>>   mem_region.
>> * iova_tree_remove always removed region with .iova = 0. Right iova were
>>   sent to the device.
>> * Next map will fill the first region with .iova = 0, causing a mapping
>>   with the same iova and device complains, if the next action is a map.
>> * Next unmap will cause to try to unmap again iova = 0, causing the
>>   device to complain that no region was mapped at iova = 0.
>>
>> Fixes: 34e3c94edaef ("vdpa: Add custom IOTLB translations to SVQ")
>> Reported-by: Lei Yang <leiyang@redhat.com>
>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>  hw/virtio/vhost-vdpa.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
>> index bce64f4..3ff9ce3 100644
>> --- a/hw/virtio/vhost-vdpa.c
>> +++ b/hw/virtio/vhost-vdpa.c
>> @@ -290,7 +290,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>>
>>          result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
>>          iova = result->iova;
>> -        vhost_iova_tree_remove(v->iova_tree, &mem_region);
>> +        vhost_iova_tree_remove(v->iova_tree, result);
>>      }
>>      vhost_vdpa_iotlb_batch_begin_once(v);
>>      ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
>> --
>> 2.7.4
>>




* Re: [PULL 3/3] vdpa: Fix memory listener deletions of iova tree
  2022-07-28  6:26     ` Jason Wang
@ 2022-07-28  8:29       ` Lei Yang
  0 siblings, 0 replies; 14+ messages in thread
From: Lei Yang @ 2022-07-28  8:29 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel, Peter Maydell, Eugenio Pérez

[-- Attachment #1: Type: text/plain, Size: 8502 bytes --]

Jason Wang <jasowang@redhat.com> 于2022年7月28日周四 14:27写道:

> On Thu, Jul 28, 2022 at 2:14 PM Lei Yang <leiyang@redhat.com> wrote:
> >
> > I tried to manually changed this line then test this branch on local
> host. After the migration is successful, the qemu core dump occurs on the
> shutdown inside guest.
> >
> > Compiled qemu Steps:
> > # git clone https://gitlab.com/eperezmartin/qemu-kvm.git
> > # cd qemu-kvm/
> > # mkdir build
> > # cd build/
> > # git checkout bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9
>
> I got this
>
> fatal: reference is not a tree: bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9
>
> and my HEAD is:
>
> commit 7b17a1a841fc2336eba53afade9cadb14bd3dd9a (HEAD -> master, tag:
> v7.1.0-rc0, origin/master, origin/HEAD)
> Author: Richard Henderson <richard.henderson@linaro.org>
> Date:   Tue Jul 26 18:03:16 2022 -0700
>
>     Update version for v7.1.0-rc0 release
>
>     Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>

I tried to recompile it using the commit you mentioned, but the problem
is reproduced again:
# git clone git://git.qemu.org/qemu.git
# cd qemu/
# git log
# mkdir build
# cd build/
# vim /root/qemu/hw/virtio/vhost-vdpa.c
# ../configure --target-list=x86_64-softmmu --enable-debug
# make

Latest commit:
commit 7b17a1a841fc2336eba53afade9cadb14bd3dd9a (HEAD -> master, tag:
v7.1.0-rc0, origin/master, origin/HEAD)
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jul 26 18:03:16 2022 -0700

    Update version for v7.1.0-rc0 release

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

>
> > # vim /root/qemu-kvm/hw/virtio/vhost-vdpa.c
> > (Chanege "vhost_iova_tree_remove(v->iova_tree, &mem_region);" to
> "vhost_iova_tree_remove(v->iova_tree, result);")
>
> Any reason you need to manually change the line since it has been merged?
>
> > # ../configure --target-list=x86_64-softmmu --enable-debug
> > # make
>
> So if I understand you correctly, you meant the issue is not fixed?
>

From my side, this is a new issue, because the guest can boot up
normally and complete the migration. It is just that after the
migration succeeds, a core dump occurs on shutdown inside the guest.

Thanks

>
> Thanks
>
> >
> > Core dump messages:
> > # gdb /root/qemu-kvm/build/x86_64-softmmu/qemu-system-x86_64
> core.qemu-system-x86.7419
> > (gdb) bt full
> > #0  0x000056107c19afa9 in vhost_vdpa_listener_region_del
> (listener=0x7ff9a9c691a0, section=0x7ffd3889ad20)
> >     at ../hw/virtio/vhost-vdpa.c:290
> >         result = 0x0
> >         vaddr = 0x7ff29be00000
> >         mem_region = {iova = 0, translated_addr = 140679973961728, size
> = 30064771071, perm = IOMMU_NONE}
> >         v = 0x7ff9a9c69190
> >         iova = 4294967296
> >         llend = 34359738368
> >         llsize = 30064771072
> >         ret = 32765
> >         __func__ = "vhost_vdpa_listener_region_del"
> > #1  0x000056107c1ca915 in listener_del_address_space
> (listener=0x7ff9a9c691a0, as=0x56107cccbc00 <address_space_memory>)
> >     at ../softmmu/memory.c:2939
> >         section =
> >           {size = 30064771072, mr = 0x56107e116270, fv = 0x7ff1e02a4090,
> offset_within_region = 2147483648, offset_within_address_space =
> 4294967296, readonly = false, nonvolatile = false}
> >         view = 0x7ff1e02a4090
> >         fr = 0x7ff1e04027f0
> > #2  0x000056107c1cac39 in memory_listener_unregister
> (listener=0x7ff9a9c691a0) at ../softmmu/memory.c:2989
> > #3  0x000056107c19d007 in vhost_vdpa_dev_start (dev=0x56107e126ea0,
> started=false) at ../hw/virtio/vhost-vdpa.c:1134
> >         v = 0x7ff9a9c69190
> >         ok = true
> > #4  0x000056107c190252 in vhost_dev_stop (hdev=0x56107e126ea0,
> vdev=0x56107f40cb50) at ../hw/virtio/vhost.c:1828
> >         i = 32761
> >         __PRETTY_FUNCTION__ = "vhost_dev_stop"
> > #5  0x000056107bebe26c in vhost_net_stop_one (net=0x56107e126ea0,
> dev=0x56107f40cb50) at ../hw/net/vhost_net.c:315
> >         file = {index = 0, fd = -1}
> >         __PRETTY_FUNCTION__ = "vhost_net_stop_one"
> > #6  0x000056107bebe6bf in vhost_net_stop (dev=0x56107f40cb50,
> ncs=0x56107f421850, data_queue_pairs=1, cvq=0)
> >     at ../hw/net/vhost_net.c:425
> >         qbus = 0x56107f40cac8
> >         vbus = 0x56107f40cac8
> >         k = 0x56107df1a220
> >         n = 0x56107f40cb50
> >         peer = 0x7ff9a9c69010
> >         total_notifiers = 2
> >         nvhosts = 1
> >         i = 0
> > --Type <RET> for more, q to quit, c to continue without paging--
> >         r = 32765
> >         __PRETTY_FUNCTION__ = "vhost_net_stop"
> > #7  0x000056107c14af24 in virtio_net_vhost_status (n=0x56107f40cb50,
> status=15 '\017') at ../hw/net/virtio-net.c:298
> >         vdev = 0x56107f40cb50
> >         nc = 0x56107f421850
> >         queue_pairs = 1
> >         cvq = 0
> > #8  0x000056107c14b17e in virtio_net_set_status (vdev=0x56107f40cb50,
> status=15 '\017') at ../hw/net/virtio-net.c:372
> >         n = 0x56107f40cb50
> >         q = 0x56107f40cb50
> >         i = 32765
> >         queue_status = 137 '\211'
> > #9  0x000056107c185af2 in virtio_set_status (vdev=0x56107f40cb50, val=15
> '\017') at ../hw/virtio/virtio.c:1947
> >         k = 0x56107dfe2c60
> > #10 0x000056107c188cbb in virtio_vmstate_change (opaque=0x56107f40cb50,
> running=false, state=RUN_STATE_SHUTDOWN)
> >     at ../hw/virtio/virtio.c:3195
> >         vdev = 0x56107f40cb50
> >         qbus = 0x56107f40cac8
> >         k = 0x56107df1a220
> >         backend_run = false
> > #11 0x000056107bfdca5e in vm_state_notify (running=false,
> state=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:334
> >         e = 0x56107f419950
> >         next = 0x56107f224b80
> > #12 0x000056107bfd43e6 in do_vm_stop (state=RUN_STATE_SHUTDOWN,
> send_stop=false) at ../softmmu/cpus.c:263
> >         ret = 0
> > #13 0x000056107bfd4420 in vm_shutdown () at ../softmmu/cpus.c:281
> > #14 0x000056107bfdd584 in qemu_cleanup () at ../softmmu/runstate.c:813
> > #15 0x000056107bd81a5b in main (argc=65, argv=0x7ffd3889b178,
> envp=0x7ffd3889b388) at ../softmmu/main.c:51
> >
> >
> > Thanks
> > Lei
> >
> > Jason Wang <jasowang@redhat.com> 于2022年7月26日周二 16:51写道:
> >>
> >> From: Eugenio Pérez <eperezma@redhat.com>
> >>
> >> vhost_vdpa_listener_region_del is always deleting the first iova entry
> >> of the tree, since it's using the needle iova instead of the result's
> >> one.
> >>
> >> This was detected using a vga virtual device in the VM using vdpa SVQ.
> >> It makes some extra memory adding and deleting, so the wrong one was
> >> mapped / unmapped. This was undetected before since all the memory was
> >> mappend and unmapped totally without that device, but other conditions
> >> could trigger it too:
> >>
> >> * mem_region was with .iova = 0, .translated_addr = (correct GPA).
> >> * iova_tree_find_iova returned right result, but does not update
> >>   mem_region.
> >> * iova_tree_remove always removed region with .iova = 0. Right iova were
> >>   sent to the device.
> >> * Next map will fill the first region with .iova = 0, causing a mapping
> >>   with the same iova and device complains, if the next action is a map.
> >> * Next unmap will cause to try to unmap again iova = 0, causing the
> >>   device to complain that no region was mapped at iova = 0.
> >>
> >> Fixes: 34e3c94edaef ("vdpa: Add custom IOTLB translations to SVQ")
> >> Reported-by: Lei Yang <leiyang@redhat.com>
> >> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >> ---
> >>  hw/virtio/vhost-vdpa.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> >> index bce64f4..3ff9ce3 100644
> >> --- a/hw/virtio/vhost-vdpa.c
> >> +++ b/hw/virtio/vhost-vdpa.c
> >> @@ -290,7 +290,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
> >>
> >>          result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
> >>          iova = result->iova;
> >> -        vhost_iova_tree_remove(v->iova_tree, &mem_region);
> >> +        vhost_iova_tree_remove(v->iova_tree, result);
> >>      }
> >>      vhost_vdpa_iotlb_batch_begin_once(v);
> >>      ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
> >> --
> >> 2.7.4
> >>
>
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PULL 0/3] Net patches
  2023-11-21  9:57 Jason Wang
@ 2023-11-21 15:12 ` Stefan Hajnoczi
  0 siblings, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2023-11-21 15:12 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/8.2 for any user-visible changes.



* [PULL 0/3] Net patches
@ 2023-11-21  9:57 Jason Wang
  2023-11-21 15:12 ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2023-11-21  9:57 UTC (permalink / raw)
  To: qemu-devel

The following changes since commit af9264da80073435fd78944bc5a46e695897d7e5:

  Merge tag '20231119-xtensa-1' of https://github.com/OSLL/qemu-xtensa into staging (2023-11-20 05:25:19 -0500)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to 84f85eb95f14add02efd5e69f2ff7783d79b24f7:

  net: do not delete nics in net_cleanup() (2023-11-21 15:42:34 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Akihiko Odaki (2):
      net: Provide MemReentrancyGuard * to qemu_new_nic()
      net: Update MemReentrancyGuard for NIC

David Woodhouse (1):
      net: do not delete nics in net_cleanup()

 hw/net/allwinner-sun8i-emac.c |  3 ++-
 hw/net/allwinner_emac.c       |  3 ++-
 hw/net/cadence_gem.c          |  3 ++-
 hw/net/dp8393x.c              |  3 ++-
 hw/net/e1000.c                |  3 ++-
 hw/net/e1000e.c               |  2 +-
 hw/net/eepro100.c             |  4 +++-
 hw/net/etraxfs_eth.c          |  3 ++-
 hw/net/fsl_etsec/etsec.c      |  3 ++-
 hw/net/ftgmac100.c            |  3 ++-
 hw/net/i82596.c               |  2 +-
 hw/net/igb.c                  |  2 +-
 hw/net/imx_fec.c              |  2 +-
 hw/net/lan9118.c              |  3 ++-
 hw/net/mcf_fec.c              |  3 ++-
 hw/net/mipsnet.c              |  3 ++-
 hw/net/msf2-emac.c            |  3 ++-
 hw/net/mv88w8618_eth.c        |  3 ++-
 hw/net/ne2000-isa.c           |  3 ++-
 hw/net/ne2000-pci.c           |  3 ++-
 hw/net/npcm7xx_emc.c          |  3 ++-
 hw/net/opencores_eth.c        |  3 ++-
 hw/net/pcnet.c                |  3 ++-
 hw/net/rocker/rocker_fp.c     |  4 ++--
 hw/net/rtl8139.c              |  3 ++-
 hw/net/smc91c111.c            |  3 ++-
 hw/net/spapr_llan.c           |  3 ++-
 hw/net/stellaris_enet.c       |  3 ++-
 hw/net/sungem.c               |  2 +-
 hw/net/sunhme.c               |  3 ++-
 hw/net/tulip.c                |  3 ++-
 hw/net/virtio-net.c           |  6 ++++--
 hw/net/vmxnet3.c              |  2 +-
 hw/net/xen_nic.c              |  3 ++-
 hw/net/xgmac.c                |  3 ++-
 hw/net/xilinx_axienet.c       |  3 ++-
 hw/net/xilinx_ethlite.c       |  3 ++-
 hw/usb/dev-network.c          |  3 ++-
 include/net/net.h             |  2 ++
 net/net.c                     | 43 +++++++++++++++++++++++++++++++++++++------
 40 files changed, 112 insertions(+), 46 deletions(-)





* Re: [PULL 0/3] Net patches
  2021-11-19  4:03 Jason Wang
@ 2021-11-19 10:01 ` Richard Henderson
  0 siblings, 0 replies; 14+ messages in thread
From: Richard Henderson @ 2021-11-19 10:01 UTC (permalink / raw)
  To: Jason Wang, qemu-devel, peter.maydell

On 11/19/21 5:03 AM, Jason Wang wrote:
> The following changes since commit 44a3aa0608f01274418487b655d42467c1d8334e:
> 
>    Merge tag 'sev-hashes-pull-request' of https://gitlab.com/berrange/qemu into staging (2021-11-18 15:06:05 +0100)
> 
> are available in the git repository at:
> 
>    https://github.com/jasowang/qemu.git tags/net-pull-request
> 
> for you to fetch changes up to 0656fbc7ddccdade1709742a9b56ae07dd3c280a:
> 
>    net/colo-compare.c: Fix incorrect return when input wrong size (2021-11-19 11:44:22 +0800)
> 
> ----------------------------------------------------------------
> 
> ----------------------------------------------------------------
> Prasad J Pandit (1):
>        net: vmxnet3: validate configuration values during activate (CVE-2021-20203)
> 
> Zhang Chen (2):
>        net/colo-compare.c: Fix ACK track reverse issue
>        net/colo-compare.c: Fix incorrect return when input wrong size
> 
>   hw/net/vmxnet3.c   | 13 +++++++++++++
>   net/colo-compare.c |  8 +++++---
>   2 files changed, 18 insertions(+), 3 deletions(-)

Applied, thanks.

r~



* [PULL 0/3] Net patches
@ 2021-11-19  4:03 Jason Wang
  2021-11-19 10:01 ` Richard Henderson
  0 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2021-11-19  4:03 UTC (permalink / raw)
  To: qemu-devel, peter.maydell; +Cc: Jason Wang

The following changes since commit 44a3aa0608f01274418487b655d42467c1d8334e:

  Merge tag 'sev-hashes-pull-request' of https://gitlab.com/berrange/qemu into staging (2021-11-18 15:06:05 +0100)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to 0656fbc7ddccdade1709742a9b56ae07dd3c280a:

  net/colo-compare.c: Fix incorrect return when input wrong size (2021-11-19 11:44:22 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Prasad J Pandit (1):
      net: vmxnet3: validate configuration values during activate (CVE-2021-20203)

Zhang Chen (2):
      net/colo-compare.c: Fix ACK track reverse issue
      net/colo-compare.c: Fix incorrect return when input wrong size

 hw/net/vmxnet3.c   | 13 +++++++++++++
 net/colo-compare.c |  8 +++++---
 2 files changed, 18 insertions(+), 3 deletions(-)





* Re: [PULL 0/3] Net patches
  2021-05-26  8:24 Jason Wang
@ 2021-05-26  9:09 ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-26  9:09 UTC (permalink / raw)
  To: peter.maydell; +Cc: Jason Wang, qemu-devel

On 5/26/21 10:24 AM, Jason Wang wrote:
> The following changes since commit d90f154867ec0ec22fd719164b88716e8fd48672:
> 
>   Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210504' into staging (2021-05-05 20:29:14 +0100)
> 
> are available in the git repository at:
> 
>   https://github.com/jasowang/qemu.git tags/net-pull-request
> 
> for you to fetch changes up to 7ec0d72cd519e569b6d1ef11be770beb67dd0824:
> 
>   tap-bsd: Remove special casing for older OpenBSD releases (2021-05-26 16:20:27 +0800)
> 
> ----------------------------------------------------------------
> 
> ----------------------------------------------------------------
> Brad Smith (1):
>       tap-bsd: Remove special casing for older OpenBSD releases
> 
> Guenter Roeck (1):
>       hw/net/imx_fec: return 0xffff when accessing non-existing PHY
> 
> Laurent Vivier (1):
>       virtio-net: failover: add missing remove_migration_state_change_notifier()
> 
>  hw/net/imx_fec.c    | 8 +++-----
>  hw/net/trace-events | 2 ++
>  hw/net/virtio-net.c | 1 +
>  net/tap-bsd.c       | 8 --------
>  4 files changed, 6 insertions(+), 13 deletions(-)

UTF-8 mojibake in patch 1.



* [PULL 0/3] Net patches
@ 2021-05-26  8:24 Jason Wang
  2021-05-26  9:09 ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 14+ messages in thread
From: Jason Wang @ 2021-05-26  8:24 UTC (permalink / raw)
  To: peter.maydell; +Cc: Jason Wang, qemu-devel

The following changes since commit d90f154867ec0ec22fd719164b88716e8fd48672:

  Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210504' into staging (2021-05-05 20:29:14 +0100)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to 7ec0d72cd519e569b6d1ef11be770beb67dd0824:

  tap-bsd: Remove special casing for older OpenBSD releases (2021-05-26 16:20:27 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Brad Smith (1):
      tap-bsd: Remove special casing for older OpenBSD releases

Guenter Roeck (1):
      hw/net/imx_fec: return 0xffff when accessing non-existing PHY

Laurent Vivier (1):
      virtio-net: failover: add missing remove_migration_state_change_notifier()

 hw/net/imx_fec.c    | 8 +++-----
 hw/net/trace-events | 2 ++
 hw/net/virtio-net.c | 1 +
 net/tap-bsd.c       | 8 --------
 4 files changed, 6 insertions(+), 13 deletions(-)





end of thread, other threads:[~2023-11-21 15:14 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-26  8:50 [PULL 0/3] Net patches Jason Wang
2022-07-26  8:50 ` [PULL 1/3] e1000e: Fix possible interrupt loss when using MSI Jason Wang
2022-07-26  8:50 ` [PULL 2/3] vhost: Get vring base from vq, not svq Jason Wang
2022-07-26  8:50 ` [PULL 3/3] vdpa: Fix memory listener deletions of iova tree Jason Wang
2022-07-28  6:14   ` Lei Yang
2022-07-28  6:26     ` Jason Wang
2022-07-28  8:29       ` Lei Yang
2022-07-26 12:28 ` [PULL 0/3] Net patches Peter Maydell
  -- strict thread matches above, loose matches on Subject: below --
2023-11-21  9:57 Jason Wang
2023-11-21 15:12 ` Stefan Hajnoczi
2021-11-19  4:03 Jason Wang
2021-11-19 10:01 ` Richard Henderson
2021-05-26  8:24 Jason Wang
2021-05-26  9:09 ` Philippe Mathieu-Daudé
