* virtio on Xen cannot work
@ 2014-11-24  1:58 Wen Congyang
  2014-11-24  8:52 ` qemu crash with virtio on Xen domUs (backtrace included) Fabio Fantoni
  2014-11-24  8:52 ` [Qemu-devel] " Fabio Fantoni
  0 siblings, 2 replies; 27+ messages in thread
From: Wen Congyang @ 2014-11-24  1:58 UTC (permalink / raw)
  To: xen devel

When I try to use virtio on a Xen HVM guest, QEMU crashes. Here is the backtrace:
(gdb) bt
#0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
#1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
#2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
#3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
#4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
#5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
#6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
#7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
#8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
#9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
#10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
#11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
#12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
#13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
#14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
    at /root/work/xen/tools/qemu-xen-dir/memory.c:478
#15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
#16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
#17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
#18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
#19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
#20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
#21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
#22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
#23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
#24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
#25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
#26 0x00007f495ae1386f in main_loop () at vl.c:2056
#27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
(gdb) q

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [Qemu-devel] qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24  1:58 virtio on Xen cannot work Wen Congyang
  2014-11-24  8:52 ` qemu crash with virtio on Xen domUs (backtrace included) Fabio Fantoni
@ 2014-11-24  8:52 ` Fabio Fantoni
  2014-11-24  9:25   ` Wen Congyang
  2014-11-24  9:25   ` [Qemu-devel] " Wen Congyang
  1 sibling, 2 replies; 27+ messages in thread
From: Fabio Fantoni @ 2014-11-24  8:52 UTC (permalink / raw)
  To: Wen Congyang, xen devel, qemu-devel; +Cc: anthony PERARD, Stefano Stabellini

On 24/11/2014 02:58, Wen Congyang wrote:
> When I try to use virtio on a Xen HVM guest, QEMU crashes. Here is the backtrace:
> (gdb) bt
> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
> (gdb) q
>
>
Added qemu-devel and the QEMU maintainers for Xen to CC.

@Wen Congyang: when you report a bug, it is useful to add more details and
logs: the domU's xl cfg, the domU's qemu log, the Xen and QEMU versions used, etc.

* Re: [Qemu-devel] qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24  8:52 ` [Qemu-devel] " Fabio Fantoni
  2014-11-24  9:25   ` Wen Congyang
@ 2014-11-24  9:25   ` Wen Congyang
  2014-11-24 15:23     ` Stefano Stabellini
  2014-11-24 15:23     ` [Qemu-devel] " Stefano Stabellini
  1 sibling, 2 replies; 27+ messages in thread
From: Wen Congyang @ 2014-11-24  9:25 UTC (permalink / raw)
  To: Fabio Fantoni, xen devel, qemu-devel; +Cc: anthony PERARD, Stefano Stabellini

On 11/24/2014 04:52 PM, Fabio Fantoni wrote:
> On 24/11/2014 02:58, Wen Congyang wrote:
>> When I try to use virtio on a Xen HVM guest, QEMU crashes. Here is the backtrace:
>> (gdb) bt
>> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
>> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
>> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
>> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
>> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
>> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
>> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
>> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
>> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
>> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
>> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
>> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
>> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
>> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
>> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
>>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
>> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
>> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
>> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
>> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
>> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
>> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
>> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
>> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
>> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
>> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
>> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
>> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
>> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
>> (gdb) q
>>
>>
> Added qemu-devel and the QEMU maintainers for Xen to CC.
> 
> @Wen Congyang: when you report a bug, it is useful to add more details and logs: the domU's xl cfg, the domU's qemu log, the Xen and QEMU versions used, etc.
> .
> 

I did not back up the config file before changing it. As I recall, I only changed vcpus and the NIC model.
Here is the config file:
===================================================
builder='hvm'

memory = 2048
vcpus=2
cpus="3"

name = "hvm_nopv"

disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,target=/data/images/xen/hvm_nopv/suse/hvm.img'
#      , 'format=raw,devtype=disk,access=w,vdev=hdb,target=/data/images/xen/hvm_nopv/suse/hvm_data.img'
       ]

vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=virtio-net' ]

#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="c"

sdl=0

vnc=1

vnclisten='0.0.0.0'

vncunused = 1

stdvga = 0

serial='pty'

apic=1
apci=1
pae=1

extid=0
keymap="en-us"
localtime=1
hpet=1

usbdevice='tablet'

xen_platform_pci=0
===================================================

qemu log (/var/log/xen/qemu-xxx):
char device redirected to /dev/pts/2 (label serial0)
xen_ram_addr_from_mapcache, could not find 0x7f267bd828e8

qemu version:
qemu-upstream-unstable:
http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=summary
commit: ca78cc83650b535d7e24baeaea32947be0141534

xl: not the newest; commit c90a755f261b8ccb3dac7e1f695122cac95c6053. I changed
some code (Remus-related: suspend/resume...).

* Re: [Qemu-devel] qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24  9:25   ` [Qemu-devel] " Wen Congyang
  2014-11-24 15:23     ` Stefano Stabellini
@ 2014-11-24 15:23     ` Stefano Stabellini
  2014-11-24 17:32       ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
  2014-11-24 17:32       ` Stefano Stabellini
  1 sibling, 2 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 15:23 UTC (permalink / raw)
  To: Wen Congyang
  Cc: Stefano Stabellini, qemu-devel, xen devel, Fabio Fantoni,
	anthony PERARD, Paolo Bonzini

CC'ing Paolo.


Wen,
thanks for the logs.

I investigated a little bit and it seems to me that the bug occurs when
QEMU tries to unmap only a portion of a memory region previously mapped.
That doesn't work with xen-mapcache.

See these logs for example:

DEBUG address_space_map phys_addr=78ed8b44 vaddr=7f9609bedb44 len=0xa
DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

that leads to the error:

xen_ram_addr_from_mapcache, could not find 0x7fab50afbb68


Paolo, do you know why virtio would call address_space_unmap with a
different set of arguments compared to the previous address_space_map
call?


On Mon, 24 Nov 2014, Wen Congyang wrote:
> On 11/24/2014 04:52 PM, Fabio Fantoni wrote:
> > On 24/11/2014 02:58, Wen Congyang wrote:
> >> When I try to use virtio on a Xen HVM guest, QEMU crashes. Here is the backtrace:
> >> (gdb) bt
> >> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
> >> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
> >> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
> >> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
> >> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
> >> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
> >> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
> >> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
> >> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
> >> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
> >> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
> >> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
> >> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
> >> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
> >> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
> >>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
> >> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
> >> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
> >> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
> >> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
> >> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
> >> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
> >> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
> >> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
> >> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
> >> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
> >> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
> >> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
> >> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
> >> (gdb) q
> >>
> >>
> > Added qemu-devel and the QEMU maintainers for Xen to CC.
> > 
> > @Wen Congyang: when you report a bug, it is useful to add more details and logs: the domU's xl cfg, the domU's qemu log, the Xen and QEMU versions used, etc.
> > .
> > 
> 
> I did not back up the config file before changing it. As I recall, I only changed vcpus and the NIC model.
> Here is the config file:
> ===================================================
> builder='hvm'
> 
> memory = 2048
> vcpus=2
> cpus="3"
> 
> name = "hvm_nopv"
> 
> disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,target=/data/images/xen/hvm_nopv/suse/hvm.img'
> #      , 'format=raw,devtype=disk,access=w,vdev=hdb,target=/data/images/xen/hvm_nopv/suse/hvm_data.img'
>        ]
> 
> vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=virtio-net' ]
> 
> #-----------------------------------------------------------------------------
> # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
> # default: hard disk, cd-rom, floppy
> boot="c"
> 
> sdl=0
> 
> vnc=1
> 
> vnclisten='0.0.0.0'
> 
> vncunused = 1
> 
> stdvga = 0
> 
> serial='pty'
> 
> apic=1
> apci=1
> pae=1
> 
> extid=0
> keymap="en-us"
> localtime=1
> hpet=1
> 
> usbdevice='tablet'
> 
> xen_platform_pci=0
> ===================================================
> 
> qemu log(/var/log/xen/qemu-xxx):
> char device redirected to /dev/pts/2 (label serial0)
> xen_ram_addr_from_mapcache, could not find 0x7f267bd828e8
> 
> qemu version:
> qemu-upstream-unstable:
> http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=summary
> commit: ca78cc83650b535d7e24baeaea32947be0141534
> 
> xl: not the newest, commit c90a755f261b8ccb3dac7e1f695122cac95c6053. I changed
> some code (remus-related/suspend/resume...)
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24  9:25   ` [Qemu-devel] " Wen Congyang
@ 2014-11-24 15:23     ` Stefano Stabellini
  2014-11-24 15:23     ` [Qemu-devel] " Stefano Stabellini
  1 sibling, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 15:23 UTC (permalink / raw)
  To: Wen Congyang
  Cc: Stefano Stabellini, qemu-devel, xen devel, Fabio Fantoni,
	anthony PERARD, Paolo Bonzini

CC'ing Paolo.


Wen,
thanks for the logs.

I investigated a little bit and it seems to me that the bug occurs when
QEMU tries to unmap only a portion of a memory region previously mapped.
That doesn't work with xen-mapcache.

See these logs for example:

DEBUG address_space_map phys_addr=78ed8b44 vaddr=7f9609bedb44 len=0xa
DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

that leads to the error:

xen_ram_addr_from_mapcache, could not find 0x7fab50afbb68


Paolo, do you know why virtio would call address_space_unmap with a
different set of arguments compared to the previous address_space_map
call?


On Mon, 24 Nov 2014, Wen Congyang wrote:
> On 11/24/2014 04:52 PM, Fabio Fantoni wrote:
> > Il 24/11/2014 02:58, Wen Congyang ha scritto:
> >> When I try to use virtio on xen(HVM guest), qemu crashed. Here is the backtrace:
> >> (gdb) bt
> >> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
> >> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
> >> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
> >> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
> >> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
> >> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
> >> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
> >> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
> >> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
> >> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
> >> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
> >> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
> >> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
> >> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
> >> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
> >>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
> >> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
> >> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
> >> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
> >> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
> >> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
> >> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
> >> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
> >> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
> >> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
> >> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
> >> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
> >> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
> >> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
> >> (gdb) q
> >>
> >>
> > Added qemu-devel and qemu maintainer in xen to cc.
> > 
> > @Wen Congyang: when you report a bug, it is useful to add more details and logs: the domU's xl cfg, the domU's qemu log, the xen and qemu versions used, etc.
> > .
> > 
> 
> > The config file was not backed up before the changes. I remember I only changed the vcpus and the nic model.
> Here is the config file:
> ===================================================
> builder='hvm'
> 
> memory = 2048
> vcpus=2
> cpus="3"
> 
> name = "hvm_nopv"
> 
> disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,target=/data/images/xen/hvm_nopv/suse/hvm.img'
> #      , 'format=raw,devtype=disk,access=w,vdev=hdb,target=/data/images/xen/hvm_nopv/suse/hvm_data.img'
>        ]
> 
> vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=virtio-net' ]
> 
> #-----------------------------------------------------------------------------
> # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
> # default: hard disk, cd-rom, floppy
> boot="c"
> 
> sdl=0
> 
> vnc=1
> 
> vnclisten='0.0.0.0'
> 
> vncunused = 1
> 
> stdvga = 0
> 
> serial='pty'
> 
> apic=1
> apci=1
> pae=1
> 
> extid=0
> keymap="en-us"
> localtime=1
> hpet=1
> 
> usbdevice='tablet'
> 
> xen_platform_pci=0
> ===================================================
> 
> qemu log(/var/log/xen/qemu-xxx):
> char device redirected to /dev/pts/2 (label serial0)
> xen_ram_addr_from_mapcache, could not find 0x7f267bd828e8
> 
> qemu version:
> qemu-upstream-unstable:
> http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=summary
> commit: ca78cc83650b535d7e24baeaea32947be0141534
> 
> xl: not the newest, commit c90a755f261b8ccb3dac7e1f695122cac95c6053. I changed
> some code (remus-related/suspend/resume...)
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 15:23     ` [Qemu-devel] " Stefano Stabellini
@ 2014-11-24 17:32       ` Stefano Stabellini
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
  2014-11-24 18:44         ` Stefano Stabellini
  2014-11-24 17:32       ` Stefano Stabellini
  1 sibling, 2 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 17:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, xen devel, Fabio Fantoni, anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> CC'ing Paolo.
> 
> 
> Wen,
> thanks for the logs.
> 
> I investigated a little bit and it seems to me that the bug occurs when
> QEMU tries to unmap only a portion of a memory region previously mapped.
> That doesn't work with xen-mapcache.
> 
> See these logs for example:
> 
> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

Sorry, the logs don't quite match; it was supposed to be:

DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6



> that leads to the error:
> 
> xen_ram_addr_from_mapcache, could not find 0x7fab50afbb68
> 
> 
> Paolo, do you know why virtio would call address_space_unmap with a
> different set of arguments compared to the previous address_space_map
> call?
> 
> 
> On Mon, 24 Nov 2014, Wen Congyang wrote:
> > On 11/24/2014 04:52 PM, Fabio Fantoni wrote:
> > > Il 24/11/2014 02:58, Wen Congyang ha scritto:
> > >> When I try to use virtio on xen(HVM guest), qemu crashed. Here is the backtrace:
> > >> (gdb) bt
> > >> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
> > >> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
> > >> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
> > >> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
> > >> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
> > >> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
> > >> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
> > >> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
> > >> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
> > >> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
> > >> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
> > >> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
> > >> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
> > >> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
> > >> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
> > >>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
> > >> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
> > >> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
> > >> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
> > >> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
> > >> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
> > >> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
> > >> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
> > >> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
> > >> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
> > >> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
> > >> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
> > >> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
> > >> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
> > >> (gdb) q
> > >>
> > >>
> > > Added qemu-devel and qemu maintainer in xen to cc.
> > > 
> > > @Wen Congyang: when you report a bug, it is useful to add more details and logs: the domU's xl cfg, the domU's qemu log, the xen and qemu versions used, etc.
> > > .
> > > 
> > 
> > The config file was not backed up before the changes. I remember I only changed the vcpus and the nic model.
> > Here is the config file:
> > ===================================================
> > builder='hvm'
> > 
> > memory = 2048
> > vcpus=2
> > cpus="3"
> > 
> > name = "hvm_nopv"
> > 
> > disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,target=/data/images/xen/hvm_nopv/suse/hvm.img'
> > #      , 'format=raw,devtype=disk,access=w,vdev=hdb,target=/data/images/xen/hvm_nopv/suse/hvm_data.img'
> >        ]
> > 
> > vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=virtio-net' ]
> > 
> > #-----------------------------------------------------------------------------
> > # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
> > # default: hard disk, cd-rom, floppy
> > boot="c"
> > 
> > sdl=0
> > 
> > vnc=1
> > 
> > vnclisten='0.0.0.0'
> > 
> > vncunused = 1
> > 
> > stdvga = 0
> > 
> > serial='pty'
> > 
> > apic=1
> > apci=1
> > pae=1
> > 
> > extid=0
> > keymap="en-us"
> > localtime=1
> > hpet=1
> > 
> > usbdevice='tablet'
> > 
> > xen_platform_pci=0
> > ===================================================
> > 
> > qemu log(/var/log/xen/qemu-xxx):
> > char device redirected to /dev/pts/2 (label serial0)
> > xen_ram_addr_from_mapcache, could not find 0x7f267bd828e8
> > 
> > qemu version:
> > qemu-upstream-unstable:
> > http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=summary
> > commit: ca78cc83650b535d7e24baeaea32947be0141534
> > 
> > xl: not the newest, commit c90a755f261b8ccb3dac7e1f695122cac95c6053. I changed
> > some code (remus-related/suspend/resume...)
> > 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 15:23     ` [Qemu-devel] " Stefano Stabellini
  2014-11-24 17:32       ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
@ 2014-11-24 17:32       ` Stefano Stabellini
  1 sibling, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 17:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Wen Congyang, qemu-devel, xen devel, Fabio Fantoni,
	anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> CC'ing Paolo.
> 
> 
> Wen,
> thanks for the logs.
> 
> I investigated a little bit and it seems to me that the bug occurs when
> QEMU tries to unmap only a portion of a memory region previously mapped.
> That doesn't work with xen-mapcache.
> 
> See these logs for example:
> 
> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

Sorry, the logs don't quite match; it was supposed to be:

DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6



> that leads to the error:
> 
> xen_ram_addr_from_mapcache, could not find 0x7fab50afbb68
> 
> 
> Paolo, do you know why virtio would call address_space_unmap with a
> different set of arguments compared to the previous address_space_map
> call?
> 
> 
> On Mon, 24 Nov 2014, Wen Congyang wrote:
> > On 11/24/2014 04:52 PM, Fabio Fantoni wrote:
> > > Il 24/11/2014 02:58, Wen Congyang ha scritto:
> > >> When I try to use virtio on xen(HVM guest), qemu crashed. Here is the backtrace:
> > >> (gdb) bt
> > >> #0  0x00007f49581f0b55 in raise () from /lib64/libc.so.6
> > >> #1  0x00007f49581f2131 in abort () from /lib64/libc.so.6
> > >> #2  0x00007f495af2af32 in xen_ram_addr_from_mapcache (ptr=0x7f4951858ac8) at /root/work/xen/tools/qemu-xen-dir/xen-mapcache.c:316
> > >> #3  0x00007f495ae30fb3 in qemu_ram_addr_from_host (ptr=0x7f4951858ac8, ram_addr=0x7fff564dc9b0) at /root/work/xen/tools/qemu-xen-dir/exec.c:1508
> > >> #4  0x00007f495ae33424 in address_space_unmap (as=0x7f495b7c3520, buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2315
> > >> #5  0x00007f495ae335b3 in cpu_physical_memory_unmap (buffer=0x7f4951858ac8, len=6, is_write=0, access_len=6) at /root/work/xen/tools/qemu-xen-dir/exec.c:2353
> > >> #6  0x00007f495ae9058d in virtqueue_fill (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1, idx=0) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:258
> > >> #7  0x00007f495ae90a0d in virtqueue_push (vq=0x7f495b931250, elem=0x7fff564dcb00, len=1) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:286
> > >> #8  0x00007f495ae82cf3 in virtio_net_handle_ctrl (vdev=0x7f495b92a5d0, vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/net/virtio-net.c:806
> > >> #9  0x00007f495ae925e5 in virtio_queue_notify_vq (vq=0x7f495b931250) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:729
> > >> #10 0x00007f495ae926c3 in virtio_queue_notify (vdev=0x7f495b92a5d0, n=2) at /root/work/xen/tools/qemu-xen-dir/hw/virtio/virtio.c:735
> > >> #11 0x00007f495ad743c2 in virtio_ioport_write (opaque=0x7f495b929cd0, addr=16, val=2) at hw/virtio/virtio-pci.c:301
> > >> #12 0x00007f495ad74923 in virtio_pci_config_write (opaque=0x7f495b929cd0, addr=16, val=2, size=2) at hw/virtio/virtio-pci.c:433
> > >> #13 0x00007f495ae9f071 in memory_region_write_accessor (mr=0x7f495b92a468, addr=16, value=0x7fff564e8d08, size=2, shift=0, mask=65535) at /root/work/xen/tools/qemu-xen-dir/memory.c:441
> > >> #14 0x00007f495ae9f1ad in access_with_adjusted_size (addr=16, value=0x7fff564e8d08, size=2, access_size_min=1, access_size_max=4, access=0x7f495ae9efe8 <memory_region_write_accessor>, mr=0x7f495b92a468)
> > >>      at /root/work/xen/tools/qemu-xen-dir/memory.c:478
> > >> #15 0x00007f495aea200e in memory_region_dispatch_write (mr=0x7f495b92a468, addr=16, data=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:985
> > >> #16 0x00007f495aea5824 in io_mem_write (mr=0x7f495b92a468, addr=16, val=2, size=2) at /root/work/xen/tools/qemu-xen-dir/memory.c:1744
> > >> #17 0x00007f495ae328d3 in address_space_rw (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2, is_write=true) at /root/work/xen/tools/qemu-xen-dir/exec.c:2029
> > >> #18 0x00007f495ae32c85 in address_space_write (as=0x7f495b7c3600, addr=49200, buf=0x7fff564e8e60 "\002", len=2) at /root/work/xen/tools/qemu-xen-dir/exec.c:2091
> > >> #19 0x00007f495ae9c130 in cpu_outw (addr=49200, val=2) at /root/work/xen/tools/qemu-xen-dir/ioport.c:77
> > >> #20 0x00007f495af289d0 in do_outp (addr=49200, size=2, val=2) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:668
> > >> #21 0x00007f495af28b94 in cpu_ioreq_pio (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:729
> > >> #22 0x00007f495af28ee5 in handle_ioreq (req=0x7f495ab25000) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:781
> > >> #23 0x00007f495af29237 in cpu_handle_ioreq (opaque=0x7f495b884ad0) at /root/work/xen/tools/qemu-xen-dir/xen-hvm.c:856
> > >> #24 0x00007f495ad7d2c2 in qemu_iohandler_poll (pollfds=0x7f495b823820, ret=1) at iohandler.c:143
> > >> #25 0x00007f495ad7e2fd in main_loop_wait (nonblocking=0) at main-loop.c:485
> > >> #26 0x00007f495ae1386f in main_loop () at vl.c:2056
> > >> #27 0x00007f495ae1af17 in main (argc=35, argv=0x7fff564e94c8, envp=0x7fff564e95e8) at vl.c:4535
> > >> (gdb) q
> > >>
> > >>
> > > Added qemu-devel and qemu maintainer in xen to cc.
> > > 
> > > @Wen Congyang: when you report a bug, it is useful to add more details and logs: the domU's xl cfg, the domU's qemu log, the xen and qemu versions used, etc.
> > > .
> > > 
> > 
> > The config file was not backed up before the changes. I remember I only changed the vcpus and the nic model.
> > Here is the config file:
> > ===================================================
> > builder='hvm'
> > 
> > memory = 2048
> > vcpus=2
> > cpus="3"
> > 
> > name = "hvm_nopv"
> > 
> > disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,target=/data/images/xen/hvm_nopv/suse/hvm.img'
> > #      , 'format=raw,devtype=disk,access=w,vdev=hdb,target=/data/images/xen/hvm_nopv/suse/hvm_data.img'
> >        ]
> > 
> > vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=virtio-net' ]
> > 
> > #-----------------------------------------------------------------------------
> > # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
> > # default: hard disk, cd-rom, floppy
> > boot="c"
> > 
> > sdl=0
> > 
> > vnc=1
> > 
> > vnclisten='0.0.0.0'
> > 
> > vncunused = 1
> > 
> > stdvga = 0
> > 
> > serial='pty'
> > 
> > apic=1
> > apci=1
> > pae=1
> > 
> > extid=0
> > keymap="en-us"
> > localtime=1
> > hpet=1
> > 
> > usbdevice='tablet'
> > 
> > xen_platform_pci=0
> > ===================================================
> > 
> > qemu log(/var/log/xen/qemu-xxx):
> > char device redirected to /dev/pts/2 (label serial0)
> > xen_ram_addr_from_mapcache, could not find 0x7f267bd828e8
> > 
> > qemu version:
> > qemu-upstream-unstable:
> > http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=summary
> > commit: ca78cc83650b535d7e24baeaea32947be0141534
> > 
> > xl: not the newest, commit c90a755f261b8ccb3dac7e1f695122cac95c6053. I changed
> > some code (remus-related/suspend/resume...)
> > 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [Qemu-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 17:32       ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
@ 2014-11-24 18:44         ` Stefano Stabellini
  2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
                             ` (5 more replies)
  2014-11-24 18:44         ` Stefano Stabellini
  1 sibling, 6 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 18:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, Stefano Stabellini, xen devel, Fabio Fantoni, aliguori,
	anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > CC'ing Paolo.
> > 
> > 
> > Wen,
> > thanks for the logs.
> > 
> > I investigated a little bit and it seems to me that the bug occurs when
> > QEMU tries to unmap only a portion of a memory region previously mapped.
> > That doesn't work with xen-mapcache.
> > 
> > See these logs for example:
> > 
> > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> Sorry, the logs don't quite match; it was supposed to be:
> 
> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

It looks like the problem is caused by iov_discard_front, called by
virtio_net_handle_ctrl. Because it changes iov_base after the sg has
already been mapped (cpu_physical_memory_map), the corresponding
cpu_physical_memory_unmap will only unmap a portion of the original sg,
leaking the rest of the mapping.  On Xen the problem is worse because
xen-mapcache aborts.

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 2ac6ce5..b2b5c2d 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
     struct iovec *iov;
     unsigned int iov_cnt;
 
-    while (virtqueue_pop(vq, &elem)) {
+    while (virtqueue_pop_nomap(vq, &elem)) {
         if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
             iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
             error_report("virtio-net ctrl missing headers");
@@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
 
         iov = elem.out_sg;
         iov_cnt = elem.out_num;
-        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
+
+        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
+        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
+
+        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         if (s != sizeof(ctrl)) {
             status = VIRTIO_NET_ERR;
         } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 3e4b70c..6a4bd3a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
     }
 }
 
-int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
+int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
 {
     unsigned int i, head, max;
     hwaddr desc_pa = vq->vring.desc;
@@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
         }
     } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
 
-    /* Now map what we have collected */
-    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
-    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
 
     elem->index = head;
 
@@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
     return elem->in_num + elem->out_num;
 }
 
+int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
+{
+    int rc = virtqueue_pop_nomap(vq, elem);
+    if (rc > 0) {
+        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
+        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
+    }
+    return rc;
+}
+
 /* virtio device */
 static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
 {
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 3e54e90..40a3977 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
 void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
     size_t num_sg, int is_write);
 int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
+int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
 int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
                           unsigned int out_bytes);
 void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 17:32       ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
@ 2014-11-24 18:44         ` Stefano Stabellini
  1 sibling, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 18:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Wen Congyang, mst, Stefano Stabellini, xen devel, Fabio Fantoni,
	aliguori, anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > CC'ing Paolo.
> > 
> > 
> > Wen,
> > thanks for the logs.
> > 
> > I investigated a little bit and it seems to me that the bug occurs when
> > QEMU tries to unmap only a portion of a memory region previously mapped.
> > That doesn't work with xen-mapcache.
> > 
> > See these logs for example:
> > 
> > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> Sorry, the logs don't quite match; it was supposed to be:
> 
> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6

It looks like the problem is caused by iov_discard_front, called by
virtio_net_handle_ctrl. Because it changes iov_base after the sg has
already been mapped (cpu_physical_memory_map), the corresponding
cpu_physical_memory_unmap will only unmap a portion of the original sg,
leaking the rest of the mapping.  On Xen the problem is worse because
xen-mapcache aborts.

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 2ac6ce5..b2b5c2d 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
     struct iovec *iov;
     unsigned int iov_cnt;
 
-    while (virtqueue_pop(vq, &elem)) {
+    while (virtqueue_pop_nomap(vq, &elem)) {
         if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
             iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
             error_report("virtio-net ctrl missing headers");
@@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
 
         iov = elem.out_sg;
         iov_cnt = elem.out_num;
-        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
+
+        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
+        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
+
+        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         if (s != sizeof(ctrl)) {
             status = VIRTIO_NET_ERR;
         } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 3e4b70c..6a4bd3a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
     }
 }
 
-int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
+int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
 {
     unsigned int i, head, max;
     hwaddr desc_pa = vq->vring.desc;
@@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
         }
     } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
 
-    /* Now map what we have collected */
-    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
-    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
 
     elem->index = head;
 
@@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
     return elem->in_num + elem->out_num;
 }
 
+int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
+{
+    int rc = virtqueue_pop_nomap(vq, elem);
+    if (rc > 0) {
+        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
+        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
+    }
+    return rc;
+}
+
 /* virtio device */
 static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
 {
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 3e54e90..40a3977 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
 void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
     size_t num_sg, int is_write);
 int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
+int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
 int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
                           unsigned int out_bytes);
 void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
@ 2014-11-24 18:52           ` Konrad Rzeszutek Wilk
  2014-11-24 19:01             ` Stefano Stabellini
  2014-11-24 19:01             ` Stefano Stabellini
  2014-11-24 18:52           ` Konrad Rzeszutek Wilk
                             ` (4 subsequent siblings)
  5 siblings, 2 replies; 27+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-11-24 18:52 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: mst, qemu-devel, xen devel, Fabio Fantoni, aliguori,
	anthony PERARD, Paolo Bonzini

On Mon, Nov 24, 2014 at 06:44:45PM +0000, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > CC'ing Paolo.
> > > 
> > > 
> > > Wen,
> > > thanks for the logs.
> > > 
> > > I investigated a little bit and it seems to me that the bug occurs when
> > > QEMU tries to unmap only a portion of a memory region previously mapped.
> > > That doesn't work with xen-mapcache.
> > > 
> > > See these logs for example:
> > > 
> > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > 
> > Sorry the logs don't quite match, it was supposed to be:
> > 
> > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> It looks like the problem is caused by iov_discard_front, called by
> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> because the corresponding cpu_physical_memory_unmap will only unmap a
> portion of the original sg.  On Xen the problem is worse because
> xen-mapcache aborts.

Didn't Andy post patches for this:

http://lists.xen.org/archives/html/xen-devel/2014-09/msg02864.html

> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 2ac6ce5..b2b5c2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      struct iovec *iov;
>      unsigned int iov_cnt;
>  
> -    while (virtqueue_pop(vq, &elem)) {
> +    while (virtqueue_pop_nomap(vq, &elem)) {
>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>              error_report("virtio-net ctrl missing headers");
> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> +
> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> +
> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
>              status = VIRTIO_NET_ERR;
>          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 3e4b70c..6a4bd3a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      }
>  }
>  
> -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
>  {
>      unsigned int i, head, max;
>      hwaddr desc_pa = vq->vring.desc;
> @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>          }
>      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>  
> -    /* Now map what we have collected */
> -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
>  
>      elem->index = head;
>  
> @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>      return elem->in_num + elem->out_num;
>  }
>  
> +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +{
> +    int rc = virtqueue_pop_nomap(vq, elem);
> +    if (rc > 0) {
> +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> +    }
> +    return rc;
> +}
> +
>  /* virtio device */
>  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
>  {
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 3e54e90..40a3977 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      size_t num_sg, int is_write);
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> 


* Re: virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
  2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
@ 2014-11-24 18:52           ` Konrad Rzeszutek Wilk
  2014-11-25  1:32           ` Wen Congyang
                             ` (3 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-11-24 18:52 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Wen Congyang, mst, qemu-devel, xen devel, Fabio Fantoni,
	aliguori, anthony PERARD, Paolo Bonzini

On Mon, Nov 24, 2014 at 06:44:45PM +0000, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > CC'ing Paolo.
> > > 
> > > 
> > > Wen,
> > > thanks for the logs.
> > > 
> > > I investigated a little bit and it seems to me that the bug occurs when
> > > QEMU tries to unmap only a portion of a memory region previously mapped.
> > > That doesn't work with xen-mapcache.
> > > 
> > > See these logs for example:
> > > 
> > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > 
> > Sorry the logs don't quite match, it was supposed to be:
> > 
> > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> It looks like the problem is caused by iov_discard_front, called by
> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> because the corresponding cpu_physical_memory_unmap will only unmap a
> portion of the original sg.  On Xen the problem is worse because
> xen-mapcache aborts.

Didn't Andy post patches for this:

http://lists.xen.org/archives/html/xen-devel/2014-09/msg02864.html

> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 2ac6ce5..b2b5c2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      struct iovec *iov;
>      unsigned int iov_cnt;
>  
> -    while (virtqueue_pop(vq, &elem)) {
> +    while (virtqueue_pop_nomap(vq, &elem)) {
>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>              error_report("virtio-net ctrl missing headers");
> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> +
> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> +
> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
>              status = VIRTIO_NET_ERR;
>          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 3e4b70c..6a4bd3a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      }
>  }
>  
> -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
>  {
>      unsigned int i, head, max;
>      hwaddr desc_pa = vq->vring.desc;
> @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>          }
>      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>  
> -    /* Now map what we have collected */
> -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
>  
>      elem->index = head;
>  
> @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>      return elem->in_num + elem->out_num;
>  }
>  
> +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +{
> +    int rc = virtqueue_pop_nomap(vq, elem);
> +    if (rc > 0) {
> +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> +    }
> +    return rc;
> +}
> +
>  /* virtio device */
>  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
>  {
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 3e54e90..40a3977 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      size_t num_sg, int is_write);
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> 


* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
@ 2014-11-24 19:01             ` Stefano Stabellini
  2014-11-24 19:01             ` Stefano Stabellini
  1 sibling, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 19:01 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: mst, Stefano Stabellini, qemu-devel, xen devel, Fabio Fantoni,
	aliguori, anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 24, 2014 at 06:44:45PM +0000, Stefano Stabellini wrote:
> > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > > CC'ing Paolo.
> > > > 
> > > > 
> > > > Wen,
> > > > thanks for the logs.
> > > > 
> > > > I investigated a little bit and it seems to me that the bug occurs when
> > > > QEMU tries to unmap only a portion of a memory region previously mapped.
> > > > That doesn't work with xen-mapcache.
> > > > 
> > > > See these logs for example:
> > > > 
> > > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > > 
> > > Sorry the logs don't quite match, it was supposed to be:
> > > 
> > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > 
> > It looks like the problem is caused by iov_discard_front, called by
> > virtio_net_handle_ctrl. By changing iov_base after the sg has already
> > been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> > because the corresponding cpu_physical_memory_unmap will only unmap a
> > portion of the original sg.  On Xen the problem is worse because
> > xen-mapcache aborts.
> 
> Didn't Andy post patches for this:
> 
> http://lists.xen.org/archives/html/xen-devel/2014-09/msg02864.html

It looks like those fix a Linux-side issue (the frontend), while
this patch is for a bug in QEMU (the backend).


> > 
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 2ac6ce5..b2b5c2d 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >      struct iovec *iov;
> >      unsigned int iov_cnt;
> >  
> > -    while (virtqueue_pop(vq, &elem)) {
> > +    while (virtqueue_pop_nomap(vq, &elem)) {
> >          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
> >              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
> >              error_report("virtio-net ctrl missing headers");
> > @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >  
> >          iov = elem.out_sg;
> >          iov_cnt = elem.out_num;
> > -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> > +
> > +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> > +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> > +
> > +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >          if (s != sizeof(ctrl)) {
> >              status = VIRTIO_NET_ERR;
> >          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 3e4b70c..6a4bd3a 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> >      }
> >  }
> >  
> > -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> > +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
> >  {
> >      unsigned int i, head, max;
> >      hwaddr desc_pa = vq->vring.desc;
> > @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >          }
> >      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
> >  
> > -    /* Now map what we have collected */
> > -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> > -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> >  
> >      elem->index = head;
> >  
> > @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >      return elem->in_num + elem->out_num;
> >  }
> >  
> > +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> > +{
> > +    int rc = virtqueue_pop_nomap(vq, elem);
> > +    if (rc > 0) {
> > +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> > +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> > +    }
> > +    return rc;
> > +}
> > +
> >  /* virtio device */
> >  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
> >  {
> > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > index 3e54e90..40a3977 100644
> > --- a/include/hw/virtio/virtio.h
> > +++ b/include/hw/virtio/virtio.h
> > @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
> >  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> >      size_t num_sg, int is_write);
> >  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> > +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
> >  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
> >                            unsigned int out_bytes);
> >  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> > 
> 


* Re: virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
  2014-11-24 19:01             ` Stefano Stabellini
@ 2014-11-24 19:01             ` Stefano Stabellini
  1 sibling, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-24 19:01 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Wen Congyang, mst, Stefano Stabellini, qemu-devel, xen devel,
	Fabio Fantoni, aliguori, anthony PERARD, Paolo Bonzini

On Mon, 24 Nov 2014, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 24, 2014 at 06:44:45PM +0000, Stefano Stabellini wrote:
> > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> > > > CC'ing Paolo.
> > > > 
> > > > 
> > > > Wen,
> > > > thanks for the logs.
> > > > 
> > > > I investigated a little bit and it seems to me that the bug occurs when
> > > > QEMU tries to unmap only a portion of a memory region previously mapped.
> > > > That doesn't work with xen-mapcache.
> > > > 
> > > > See these logs for example:
> > > > 
> > > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> > > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > > 
> > > Sorry the logs don't quite match, it was supposed to be:
> > > 
> > > DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> > > DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > 
> > It looks like the problem is caused by iov_discard_front, called by
> > virtio_net_handle_ctrl. By changing iov_base after the sg has already
> > been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> > because the corresponding cpu_physical_memory_unmap will only unmap a
> > portion of the original sg.  On Xen the problem is worse because
> > xen-mapcache aborts.
> 
> Didn't Andy post patches for this:
> 
> http://lists.xen.org/archives/html/xen-devel/2014-09/msg02864.html

It looks like those fix a Linux-side issue (the frontend), while
this patch is for a bug in QEMU (the backend).


> > 
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 2ac6ce5..b2b5c2d 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >      struct iovec *iov;
> >      unsigned int iov_cnt;
> >  
> > -    while (virtqueue_pop(vq, &elem)) {
> > +    while (virtqueue_pop_nomap(vq, &elem)) {
> >          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
> >              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
> >              error_report("virtio-net ctrl missing headers");
> > @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >  
> >          iov = elem.out_sg;
> >          iov_cnt = elem.out_num;
> > -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> > +
> > +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> > +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> > +
> > +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >          if (s != sizeof(ctrl)) {
> >              status = VIRTIO_NET_ERR;
> >          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 3e4b70c..6a4bd3a 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> >      }
> >  }
> >  
> > -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> > +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
> >  {
> >      unsigned int i, head, max;
> >      hwaddr desc_pa = vq->vring.desc;
> > @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >          }
> >      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
> >  
> > -    /* Now map what we have collected */
> > -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> > -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> >  
> >      elem->index = head;
> >  
> > @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >      return elem->in_num + elem->out_num;
> >  }
> >  
> > +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> > +{
> > +    int rc = virtqueue_pop_nomap(vq, elem);
> > +    if (rc > 0) {
> > +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> > +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> > +    }
> > +    return rc;
> > +}
> > +
> >  /* virtio device */
> >  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
> >  {
> > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > index 3e54e90..40a3977 100644
> > --- a/include/hw/virtio/virtio.h
> > +++ b/include/hw/virtio/virtio.h
> > @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
> >  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> >      size_t num_sg, int is_write);
> >  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> > +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
> >  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
> >                            unsigned int out_bytes);
> >  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> > 
> 


* Re: [Qemu-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
                             ` (2 preceding siblings ...)
  2014-11-25  1:32           ` Wen Congyang
@ 2014-11-25  1:32           ` Wen Congyang
  2014-11-25  6:16           ` Jason Wang
  2014-11-25  6:16           ` [Qemu-devel] [Xen-devel] " Jason Wang
  5 siblings, 0 replies; 27+ messages in thread
From: Wen Congyang @ 2014-11-25  1:32 UTC (permalink / raw)
  To: Stefano Stabellini, qemu-devel
  Cc: mst, xen devel, Fabio Fantoni, aliguori, anthony PERARD, Paolo Bonzini

On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>> CC'ing Paolo.
>>>
>>>
>>> Wen,
>>> thanks for the logs.
>>>
>>> I investigated a little bit and it seems to me that the bug occurs when
>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>> That doesn't work with xen-mapcache.
>>>
>>> See these logs for example:
>>>
>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>
>> Sorry the logs don't quite match, it was supposed to be:
>>
>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> It looks like the problem is caused by iov_discard_front, called by
> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> because the corresponding cpu_physical_memory_unmap will only unmap a
> portion of the original sg.  On Xen the problem is worse because
> xen-mapcache aborts.

This patch works for me.

Thanks
Wen Congyang

> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 2ac6ce5..b2b5c2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      struct iovec *iov;
>      unsigned int iov_cnt;
>  
> -    while (virtqueue_pop(vq, &elem)) {
> +    while (virtqueue_pop_nomap(vq, &elem)) {
>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>              error_report("virtio-net ctrl missing headers");
> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> +
> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> +
> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
>              status = VIRTIO_NET_ERR;
>          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 3e4b70c..6a4bd3a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      }
>  }
>  
> -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
>  {
>      unsigned int i, head, max;
>      hwaddr desc_pa = vq->vring.desc;
> @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>          }
>      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>  
> -    /* Now map what we have collected */
> -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
>  
>      elem->index = head;
>  
> @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>      return elem->in_num + elem->out_num;
>  }
>  
> +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +{
> +    int rc = virtqueue_pop_nomap(vq, elem);
> +    if (rc > 0) {
> +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> +    }
> +    return rc;
> +}
> +
>  /* virtio device */
>  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
>  {
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 3e54e90..40a3977 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      size_t num_sg, int is_write);
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> .
> 


* Re: virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
  2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
  2014-11-24 18:52           ` Konrad Rzeszutek Wilk
@ 2014-11-25  1:32           ` Wen Congyang
  2014-11-25  1:32           ` [Qemu-devel] " Wen Congyang
                             ` (2 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Wen Congyang @ 2014-11-25  1:32 UTC (permalink / raw)
  To: Stefano Stabellini, qemu-devel
  Cc: mst, xen devel, Fabio Fantoni, aliguori, anthony PERARD, Paolo Bonzini

On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>> CC'ing Paolo.
>>>
>>>
>>> Wen,
>>> thanks for the logs.
>>>
>>> I investigated a little bit and it seems to me that the bug occurs when
>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>> That doesn't work with xen-mapcache.
>>>
>>> See these logs for example:
>>>
>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>
>> Sorry the logs don't quite match, it was supposed to be:
>>
>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> 
> It looks like the problem is caused by iov_discard_front, called by
> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> because the corresponding cpu_physical_memory_unmap will only unmap a
> portion of the original sg.  On Xen the problem is worse because
> xen-mapcache aborts.

This patch works for me.

Thanks
Wen Congyang

> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 2ac6ce5..b2b5c2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      struct iovec *iov;
>      unsigned int iov_cnt;
>  
> -    while (virtqueue_pop(vq, &elem)) {
> +    while (virtqueue_pop_nomap(vq, &elem)) {
>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>              error_report("virtio-net ctrl missing headers");
> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> +
> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> +
> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
>              status = VIRTIO_NET_ERR;
>          } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 3e4b70c..6a4bd3a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -446,7 +446,7 @@ void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      }
>  }
>  
> -int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem)
>  {
>      unsigned int i, head, max;
>      hwaddr desc_pa = vq->vring.desc;
> @@ -505,9 +505,6 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>          }
>      } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>  
> -    /* Now map what we have collected */
> -    virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> -    virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
>  
>      elem->index = head;
>  
> @@ -517,6 +514,16 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>      return elem->in_num + elem->out_num;
>  }
>  
> +int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> +{
> +    int rc = virtqueue_pop_nomap(vq, elem);
> +    if (rc > 0) {
> +        virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
> +        virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
> +    }
> +    return rc;
> +}
> +
>  /* virtio device */
>  static void virtio_notify_vector(VirtIODevice *vdev, uint16_t vector)
>  {
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 3e54e90..40a3977 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -174,6 +174,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>      size_t num_sg, int is_write);
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> +int virtqueue_pop_nomap(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> .
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
                             ` (4 preceding siblings ...)
  2014-11-25  6:16           ` Jason Wang
@ 2014-11-25  6:16           ` Jason Wang
  2014-11-25 13:53             ` Stefano Stabellini
  2014-11-25 13:53             ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
  5 siblings, 2 replies; 27+ messages in thread
From: Jason Wang @ 2014-11-25  6:16 UTC (permalink / raw)
  To: Stefano Stabellini, qemu-devel
  Cc: mst, xen devel, Fabio Fantoni, aliguori, anthony PERARD, Paolo Bonzini

On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>> CC'ing Paolo.
>>>
>>>
>>> Wen,
>>> thanks for the logs.
>>>
>>> I investigated a little bit and it seems to me that the bug occurs when
>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>> That doesn't work with xen-mapcache.
>>>
>>> See these logs for example:
>>>
>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>> Sorry the logs don't quite match, it was supposed to be:
>>
>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> It looks like the problem is caused by iov_discard_front, called by
> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> because the corresponding cpu_physical_memory_unmap will only unmap a
> portion of the original sg.  On Xen the problem is worse because
> xen-mapcache aborts.
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 2ac6ce5..b2b5c2d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      struct iovec *iov;
>      unsigned int iov_cnt;
>  
> -    while (virtqueue_pop(vq, &elem)) {
> +    while (virtqueue_pop_nomap(vq, &elem)) {
>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>              error_report("virtio-net ctrl missing headers");
> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> +
> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> +
> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));

Does this really work? The code in fact skips the location that contains
the virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
iov_discard_front().

How about copying iov to a temp variable and using that in this function?

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-25  6:16           ` [Qemu-devel] [Xen-devel] " Jason Wang
  2014-11-25 13:53             ` Stefano Stabellini
@ 2014-11-25 13:53             ` Stefano Stabellini
  2014-11-26  5:23               ` Jason Wang
  2014-11-26  5:23               ` [Qemu-devel] [Xen-devel] " Jason Wang
  1 sibling, 2 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-25 13:53 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, Stefano Stabellini, qemu-devel, xen devel, Fabio Fantoni,
	aliguori, anthony PERARD, Paolo Bonzini

On Tue, 25 Nov 2014, Jason Wang wrote:
> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> > On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >>> CC'ing Paolo.
> >>>
> >>>
> >>> Wen,
> >>> thanks for the logs.
> >>>
> >>> I investigated a little bit and it seems to me that the bug occurs when
> >>> QEMU tries to unmap only a portion of a memory region previously mapped.
> >>> That doesn't work with xen-mapcache.
> >>>
> >>> See these logs for example:
> >>>
> >>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> >>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> >> Sorry the logs don't quite match, it was supposed to be:
> >>
> >> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> >> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> > It looks like the problem is caused by iov_discard_front, called by
> > virtio_net_handle_ctrl. By changing iov_base after the sg has already
> > been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> > because the corresponding cpu_physical_memory_unmap will only unmap a
> > portion of the original sg.  On Xen the problem is worse because
> > xen-mapcache aborts.
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 2ac6ce5..b2b5c2d 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >      struct iovec *iov;
> >      unsigned int iov_cnt;
> >  
> > -    while (virtqueue_pop(vq, &elem)) {
> > +    while (virtqueue_pop_nomap(vq, &elem)) {
> >          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
> >              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
> >              error_report("virtio-net ctrl missing headers");
> > @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >  
> >          iov = elem.out_sg;
> >          iov_cnt = elem.out_num;
> > -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> > +
> > +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> > +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> > +
> > +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> 
> Does this really work?

It seems to work here, as in it doesn't crash QEMU and I am able to boot
a guest with network. I didn't try any MAC related commands.


> The code in fact skips the location that contains
> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
> iov_discard_front().
>
> How about copy iov to a temp variable and use it in this function?

That would only work if I moved the cpu_physical_memory_unmap call
outside of virtqueue_fill, so that we could pass a different iov to each.
We need to unmap the same iov that was previously mapped by
virtqueue_pop.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-25 13:53             ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
  2014-11-26  5:23               ` Jason Wang
@ 2014-11-26  5:23               ` Jason Wang
  2014-11-26 10:53                   ` Stefano Stabellini
  1 sibling, 1 reply; 27+ messages in thread
From: Jason Wang @ 2014-11-26  5:23 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: mst, qemu-devel, xen devel, Fabio Fantoni, aliguori,
	anthony PERARD, Paolo Bonzini

On 11/25/2014 09:53 PM, Stefano Stabellini wrote:
> On Tue, 25 Nov 2014, Jason Wang wrote:
>> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>>> CC'ing Paolo.
>>>>>
>>>>>
>>>>> Wen,
>>>>> thanks for the logs.
>>>>>
>>>>> I investigated a little bit and it seems to me that the bug occurs when
>>>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>>>> That doesn't work with xen-mapcache.
>>>>>
>>>>> See these logs for example:
>>>>>
>>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>>> Sorry the logs don't quite match, it was supposed to be:
>>>>
>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>> It looks like the problem is caused by iov_discard_front, called by
>>> virtio_net_handle_ctrl. By changing iov_base after the sg has already
>>> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
>>> because the corresponding cpu_physical_memory_unmap will only unmap a
>>> portion of the original sg.  On Xen the problem is worse because
>>> xen-mapcache aborts.
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index 2ac6ce5..b2b5c2d 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>      struct iovec *iov;
>>>      unsigned int iov_cnt;
>>>  
>>> -    while (virtqueue_pop(vq, &elem)) {
>>> +    while (virtqueue_pop_nomap(vq, &elem)) {
>>>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>>>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>>>              error_report("virtio-net ctrl missing headers");
>>> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>  
>>>          iov = elem.out_sg;
>>>          iov_cnt = elem.out_num;
>>> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>>>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
>>> +
>>> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
>>> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
>>> +
>>> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>> Does this really work?
> It seems to work here, as in it doesn't crash QEMU and I am able to boot
> a guest with network. I didn't try any MAC related commands.
>

That is because the guest (not a recent kernel?) never issues commands
through the control vq.

We'd better hide implementation details such as virtqueue_map_sg() in
the virtio core instead of letting devices call it directly.
>> The code in fact skips the location that contains
>> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
>> iov_discard_front().
>>
>> How about copy iov to a temp variable and use it in this function?
> That would only work if I moved the cpu_physical_memory_unmap call
> outside of virtqueue_fill, so that we can pass different iov to them.
> We need to unmap the same iov that was previously mapped by
> virtqueue_pop.
>

I mean something like the following, or just passing the offset into the
iov to virtio_net_handle_*().

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9b88775..fdb4edd 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -798,7 +798,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
     VirtQueueElement elem;
     size_t s;
-    struct iovec *iov;
+    struct iovec *iov, *iov2;
     unsigned int iov_cnt;
 
     while (virtqueue_pop(vq, &elem)) {
@@ -808,8 +808,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
             exit(1);
         }
 
-        iov = elem.out_sg;
         iov_cnt = elem.out_num;
+        s = sizeof(struct iovec) * elem.out_num;
+        iov = g_malloc(s);
+        memcpy(iov, elem.out_sg, s);
+        iov2 = iov;
+
         s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
         if (s != sizeof(ctrl)) {
@@ -833,6 +837,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
 
         virtqueue_push(vq, &elem, sizeof(status));
         virtio_notify(vdev, vq);
+        g_free(iov2);
     }
 }

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-26  5:23               ` [Qemu-devel] [Xen-devel] " Jason Wang
@ 2014-11-26 10:53                   ` Stefano Stabellini
  0 siblings, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-26 10:53 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, Stefano Stabellini, qemu-devel, xen devel, Fabio Fantoni,
	aliguori, anthony PERARD, Paolo Bonzini

On Wed, 26 Nov 2014, Jason Wang wrote:
> On 11/25/2014 09:53 PM, Stefano Stabellini wrote:
> > On Tue, 25 Nov 2014, Jason Wang wrote:
> >> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> >>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >>>>> CC'ing Paolo.
> >>>>>
> >>>>>
> >>>>> Wen,
> >>>>> thanks for the logs.
> >>>>>
> >>>>> I investigated a little bit and it seems to me that the bug occurs when
> >>>>> QEMU tries to unmap only a portion of a memory region previously mapped.
> >>>>> That doesn't work with xen-mapcache.
> >>>>>
> >>>>> See these logs for example:
> >>>>>
> >>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> >>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> >>>> Sorry the logs don't quite match, it was supposed to be:
> >>>>
> >>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> >>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> >>> It looks like the problem is caused by iov_discard_front, called by
> >>> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> >>> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> >>> because the corresponding cpu_physical_memory_unmap will only unmap a
> >>> portion of the original sg.  On Xen the problem is worse because
> >>> xen-mapcache aborts.
> >>>
> >>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >>> index 2ac6ce5..b2b5c2d 100644
> >>> --- a/hw/net/virtio-net.c
> >>> +++ b/hw/net/virtio-net.c
> >>> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >>>      struct iovec *iov;
> >>>      unsigned int iov_cnt;
> >>>  
> >>> -    while (virtqueue_pop(vq, &elem)) {
> >>> +    while (virtqueue_pop_nomap(vq, &elem)) {
> >>>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
> >>>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
> >>>              error_report("virtio-net ctrl missing headers");
> >>> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >>>  
> >>>          iov = elem.out_sg;
> >>>          iov_cnt = elem.out_num;
> >>> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >>>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> >>> +
> >>> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> >>> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> >>> +
> >>> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >> Does this really work?
> > It seems to work here, as in it doesn't crash QEMU and I am able to boot
> > a guest with network. I didn't try any MAC related commands.
> >
> 
> It was because the guest (not a recent kernel?) never issue commands
> through control vq.
> 
> We'd better hide the implementation details such as virtqueue_map_sg()
> in virtio core instead of letting device call it directly.
> >> The code in fact skips the location that contains
> >> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
> >> iov_discard_front().
> >>
> >> How about copy iov to a temp variable and use it in this function?
> > That would only work if I moved the cpu_physical_memory_unmap call
> > outside of virtqueue_fill, so that we can pass different iov to them.
> > We need to unmap the same iov that was previously mapped by
> > virtqueue_pop.
> >
> 
> I mean something like following or just passing the offset of iov to
> virtio_net_handle_*().

Sorry, you are right, your patch works too. I tried something like this
yesterday but was confused because, even though the crash no longer
happens, virtio-net still doesn't work on Xen (the guest boots but
networking doesn't work properly inside it). That seems to be a separate
issue, though, and it affects my series too.

A possible problem with this approach is that virtqueue_push is now
called with the original iov, not the shortened one.

Are you sure that is OK? If so, we can drop my series and use this
instead.


> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 9b88775..fdb4edd 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -798,7 +798,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
>      VirtQueueElement elem;
>      size_t s;
> -    struct iovec *iov;
> +    struct iovec *iov, *iov2;
>      unsigned int iov_cnt;
>  
>      while (virtqueue_pop(vq, &elem)) {
> @@ -808,8 +808,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>              exit(1);
>          }
>  
> -        iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> +        s = sizeof(struct iovec) * elem.out_num;
> +        iov = g_malloc(s);
> +        memcpy(iov, elem.out_sg, s);
> +        iov2 = iov;
> +
>          s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
> @@ -833,6 +837,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>  
>          virtqueue_push(vq, &elem, sizeof(status));
>          virtio_notify(vdev, vq);
> +        g_free(iov2);
>      }
>  }
> 
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
@ 2014-11-26 10:53                   ` Stefano Stabellini
  0 siblings, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2014-11-26 10:53 UTC (permalink / raw)
  To: Jason Wang
  Cc: Wen Congyang, mst, Stefano Stabellini, qemu-devel, xen devel,
	Fabio Fantoni, aliguori, anthony PERARD, Paolo Bonzini

On Wed, 26 Nov 2014, Jason Wang wrote:
> On 11/25/2014 09:53 PM, Stefano Stabellini wrote:
> > On Tue, 25 Nov 2014, Jason Wang wrote:
> >> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
> >>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
> >>>>> CC'ing Paolo.
> >>>>>
> >>>>>
> >>>>> Wen,
> >>>>> thanks for the logs.
> >>>>>
> >>>>> I investigated a little bit and it seems to me that the bug occurs when
> >>>>> QEMU tries to unmap only a portion of a memory region previously mapped.
> >>>>> That doesn't work with xen-mapcache.
> >>>>>
> >>>>> See these logs for example:
> >>>>>
> >>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
> >>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> >>>> Sorry the logs don't quite match, it was supposed to be:
> >>>>
> >>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
> >>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
> >>> It looks like the problem is caused by iov_discard_front, called by
> >>> virtio_net_handle_ctrl. By changing iov_base after the sg has already
> >>> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
> >>> because the corresponding cpu_physical_memory_unmap will only unmap a
> >>> portion of the original sg.  On Xen the problem is worse because
> >>> xen-mapcache aborts.
> >>>
> >>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >>> index 2ac6ce5..b2b5c2d 100644
> >>> --- a/hw/net/virtio-net.c
> >>> +++ b/hw/net/virtio-net.c
> >>> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >>>      struct iovec *iov;
> >>>      unsigned int iov_cnt;
> >>>  
> >>> -    while (virtqueue_pop(vq, &elem)) {
> >>> +    while (virtqueue_pop_nomap(vq, &elem)) {
> >>>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
> >>>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
> >>>              error_report("virtio-net ctrl missing headers");
> >>> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> >>>  
> >>>          iov = elem.out_sg;
> >>>          iov_cnt = elem.out_num;
> >>> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >>>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
> >>> +
> >>> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
> >>> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
> >>> +
> >>> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
> >> Does this really work?
> > It seems to work here, as in it doesn't crash QEMU and I am able to boot
> > a guest with network. I didn't try any MAC related commands.
> >
> 
> It was because the guest (not a recent kernel?) never issues commands
> through the control vq.
> 
> We'd better hide implementation details such as virtqueue_map_sg() in
> the virtio core instead of letting devices call it directly.
> >> The code in fact skips the location that contains
> >> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
> >> iov_discard_front().
> >>
> >> How about copying iov to a temp variable and using it in this function?
> > That would only work if I moved the cpu_physical_memory_unmap call
> > outside of virtqueue_fill, so that we can pass different iov to them.
> > We need to unmap the same iov that was previously mapped by
> > virtqueue_pop.
> >
> 
> I mean something like the following, or just passing the offset of iov to
> virtio_net_handle_*().

Sorry, you are right: your patch works too. I tried something like this
yesterday but was confused, because even though the crash no longer
happens, virtio-net still doesn't work on Xen (the guest boots but the
network doesn't work properly). That seems to be a separate issue,
though, and it affects my series too.

A possible problem with this approach is that virtqueue_push is now
called with the original iov, not the shortened one.

Are you sure that is OK?
If so we can drop my series and use this instead.


> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 9b88775..fdb4edd 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -798,7 +798,7 @@ static void virtio_net_handle_ctrl(VirtIODevice
> *vdev, VirtQueue *vq)
>      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
>      VirtQueueElement elem;
>      size_t s;
> -    struct iovec *iov;
> +    struct iovec *iov, *iov2;
>      unsigned int iov_cnt;
>  
>      while (virtqueue_pop(vq, &elem)) {
> @@ -808,8 +808,12 @@ static void virtio_net_handle_ctrl(VirtIODevice
> *vdev, VirtQueue *vq)
>              exit(1);
>          }
>  
> -        iov = elem.out_sg;
>          iov_cnt = elem.out_num;
> +        s = sizeof(struct iovec) * elem.out_num;
> +        iov = g_malloc(s);
> +        memcpy(iov, elem.out_sg, s);
> +        iov2 = iov;
> +
>          s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
>          if (s != sizeof(ctrl)) {
> @@ -833,6 +837,7 @@ static void virtio_net_handle_ctrl(VirtIODevice
> *vdev, VirtQueue *vq)
>  
>          virtqueue_push(vq, &elem, sizeof(status));
>          virtio_notify(vdev, vq);
> +        g_free(iov2);
>      }
>  }
> 
> 


* Re: [Qemu-devel] [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)
  2014-11-26 10:53                   ` Stefano Stabellini
  (?)
  (?)
@ 2014-11-27  5:01                   ` Jason Wang
  -1 siblings, 0 replies; 27+ messages in thread
From: Jason Wang @ 2014-11-27  5:01 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: mst, qemu-devel, xen devel, Fabio Fantoni, aliguori,
	anthony PERARD, Paolo Bonzini



On 11/26/2014 06:53 PM, Stefano Stabellini wrote:
> On Wed, 26 Nov 2014, Jason Wang wrote:
>> On 11/25/2014 09:53 PM, Stefano Stabellini wrote:
>>> On Tue, 25 Nov 2014, Jason Wang wrote:
>>>> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
>>>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>>>>> CC'ing Paolo.
>>>>>>>
>>>>>>>
>>>>>>> Wen,
>>>>>>> thanks for the logs.
>>>>>>>
>>>>>>> I investigated a little bit and it seems to me that the bug occurs when
>>>>>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>>>>>> That doesn't work with xen-mapcache.
>>>>>>>
>>>>>>> See these logs for example:
>>>>>>>
>>>>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>>>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>>>>> Sorry the logs don't quite match, it was supposed to be:
>>>>>>
>>>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>>>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>>>> It looks like the problem is caused by iov_discard_front, called by
>>>>> virtio_net_handle_ctrl. By changing iov_base after the sg has already
>>>>> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
>>>>> because the corresponding cpu_physical_memory_unmap will only unmap a
>>>>> portion of the original sg.  On Xen the problem is worse because
>>>>> xen-mapcache aborts.
>>>>>
>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>> index 2ac6ce5..b2b5c2d 100644
>>>>> --- a/hw/net/virtio-net.c
>>>>> +++ b/hw/net/virtio-net.c
>>>>> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>>>      struct iovec *iov;
>>>>>      unsigned int iov_cnt;
>>>>>
>>>>> -    while (virtqueue_pop(vq, &elem)) {
>>>>> +    while (virtqueue_pop_nomap(vq, &elem)) {
>>>>>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>>>>>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>>>>>              error_report("virtio-net ctrl missing headers");
>>>>> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>>>
>>>>>          iov = elem.out_sg;
>>>>>          iov_cnt = elem.out_num;
>>>>> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>>>>>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
>>>>> +
>>>>> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
>>>>> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
>>>>> +
>>>>> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>>>> Does this really work?
>>> It seems to work here, as in it doesn't crash QEMU and I am able to boot
>>> a guest with network. I didn't try any MAC related commands.
>>>
>>
>> It was because the guest (not a recent kernel?) never issues commands
>> through the control vq.
>>
>> We'd better hide implementation details such as virtqueue_map_sg() in
>> the virtio core instead of letting devices call it directly.
>>>> The code in fact skips the location that contains
>>>> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
>>>> iov_discard_front().
>>>>
>>>> How about copying iov to a temp variable and using it in this function?
>>> That would only work if I moved the cpu_physical_memory_unmap call
>>> outside of virtqueue_fill, so that we can pass different iov to them.
>>> We need to unmap the same iov that was previously mapped by
>>> virtqueue_pop.
>>>
>>
>> I mean something like the following, or just passing the offset of iov to
>> virtio_net_handle_*().
> Sorry, you are right: your patch works too. I tried something like this
> yesterday but was confused, because even though the crash no longer
> happens, virtio-net still doesn't work on Xen (the guest boots but the
> network doesn't work properly). That seems to be a separate issue,
> though, and it affects my series too.
>
> A possible problem with this approach is that virtqueue_push is now
> called with the original iov, not the shortened one.
>
> Are you sure that is OK?

It's OK: apart from unmapping, virtqueue_push does not care about the iov at all.
> If so we can drop my series and use this instead.
>

I will submit a formal patch for this.

Thanks



end of thread, other threads:[~2014-11-27  5:02 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-11-24  1:58 virtio on Xen cannot work Wen Congyang
2014-11-24  8:52 ` qemu crash with virtio on Xen domUs (backtrace included) Fabio Fantoni
2014-11-24  8:52 ` [Qemu-devel] " Fabio Fantoni
2014-11-24  9:25   ` Wen Congyang
2014-11-24  9:25   ` [Qemu-devel] " Wen Congyang
2014-11-24 15:23     ` Stefano Stabellini
2014-11-24 15:23     ` [Qemu-devel] " Stefano Stabellini
2014-11-24 17:32       ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
2014-11-24 18:44         ` [Qemu-devel] virtio leaks cpu mappings, was: " Stefano Stabellini
2014-11-24 18:52           ` [Qemu-devel] [Xen-devel] " Konrad Rzeszutek Wilk
2014-11-24 19:01             ` Stefano Stabellini
2014-11-24 19:01             ` Stefano Stabellini
2014-11-24 18:52           ` Konrad Rzeszutek Wilk
2014-11-25  1:32           ` Wen Congyang
2014-11-25  1:32           ` [Qemu-devel] " Wen Congyang
2014-11-25  6:16           ` Jason Wang
2014-11-25  6:16           ` [Qemu-devel] [Xen-devel] " Jason Wang
2014-11-25 13:53             ` Stefano Stabellini
2014-11-25 13:53             ` [Qemu-devel] [Xen-devel] " Stefano Stabellini
2014-11-26  5:23               ` Jason Wang
2014-11-26  5:23               ` [Qemu-devel] [Xen-devel] " Jason Wang
2014-11-26 10:53                 ` Stefano Stabellini
2014-11-26 10:53                   ` Stefano Stabellini
2014-11-27  5:01                   ` Jason Wang
2014-11-27  5:01                   ` [Qemu-devel] [Xen-devel] " Jason Wang
2014-11-24 18:44         ` Stefano Stabellini
2014-11-24 17:32       ` Stefano Stabellini
