* [Qemu-devel] modern virtio on HVF
@ 2018-10-16 15:27 Roman Bolshakov
  2018-10-17  9:47 ` Stefan Hajnoczi
  0 siblings, 1 reply; 3+ messages in thread
From: Roman Bolshakov @ 2018-10-16 15:27 UTC (permalink / raw)
  To: qemu-devel

Hello dear subscribers,

I'm running Linux in QEMU on macOS with hvf accel enabled, and I'm
hitting an issue very similar to the KVM bug seen in nested KVM
environments, where KVM runs under another hypervisor:
https://bugs.launchpad.net/qemu/+bug/1636217


The symptoms are the same as in the bug above. udev hangs unless:
* -machine type=pc-i440fx-X, where X <=2.6 is used
* -accel tcg is used
* -global virtio-pci.disable-modern=on is specified

The issue was briefly noted on packer mailing list:
https://groups.google.com/forum/#!topic/packer-tool/je2D0LRhWj0

If I send Magic SysRq-T to the VM, I can see that virtio_pci hangs
indefinitely in vp_reset:
[   48.604482] systemd-udevd   D    0   121    106 0x00000100
[   48.608093] Call Trace:
[   48.609701]  ? __schedule+0x292/0x880
[   48.612076]  schedule+0x32/0x80
[   48.614189]  schedule_timeout+0x15e/0x300
[   48.616840]  ? call_timer_fn+0x140/0x140
[   48.619375]  msleep+0x2a/0x40
[   48.621284]  vp_reset+0x27/0x50 [virtio_pci]
[   48.624185]  register_virtio_device+0x71/0x100 [virtio]
[   48.627689]  virtio_pci_probe+0xad/0x120 [virtio_pci]
[   48.630825]  local_pci_probe+0x44/0xa0
[   48.633357]  pci_device_probe+0x127/0x140
[   48.636085]  driver_probe_device+0x297/0x450
[   48.638876]  __driver_attach+0xd9/0xe0
[   48.641484]  ? driver_probe_device+0x450/0x450
[   48.644393]  bus_for_each_dev+0x5a/0x90
[   48.646879]  bus_add_driver+0x41/0x260
[   48.649279]  driver_register+0x5b/0xd0
[   48.651703]  ? 0xffffffffc00ac000
[   48.653994]  do_one_initcall+0x50/0x1b0
[   48.656496]  do_init_module+0x5a/0x1fa
[   48.659001]  load_module+0x1557/0x1ed0
[   48.661507]  ? m_show+0x1b0/0x1b0
[   48.663725]  ? security_capable+0x47/0x60
[   48.666435]  SYSC_finit_module+0x80/0xb0
[   48.669021]  do_syscall_64+0x74/0x150
[   48.671222]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[   48.674565] RIP: 0033:0x7f71dac73139
[   48.677050] RSP: 002b:00007ffdcfd35058 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[   48.681997] RAX: ffffffffffffffda RBX: 000055d25bd01500 RCX: 00007f71dac73139
[   48.686608] RDX: 0000000000000000 RSI: 00007f71db5af83d RDI: 000000000000000f
[   48.691191] RBP: 00007f71db5af83d R08: 0000000000000000 R09: 0000000000000000
[   48.695750] R10: 000000000000000f R11: 0000000000000246 R12: 0000000000020000
[   48.700568] R13: 000055d25bd039b0 R14: 0000000000000000 R15: 0000000003938700

It looks like the virtio backend doesn't return a device status of 0
after vp_iowrite8, so vp_reset blocks udev:
        while (vp_ioread8(&vp_dev->common->device_status))
                msleep(1);

What could be the cause of the issue?
Any advice on how to triage it is appreciated.

Thank you,
Roman


* Re: [Qemu-devel] modern virtio on HVF
  2018-10-16 15:27 [Qemu-devel] modern virtio on HVF Roman Bolshakov
@ 2018-10-17  9:47 ` Stefan Hajnoczi
  2018-10-17 16:05   ` Roman Bolshakov
  0 siblings, 1 reply; 3+ messages in thread
From: Stefan Hajnoczi @ 2018-10-17  9:47 UTC (permalink / raw)
  To: Roman Bolshakov; +Cc: qemu-devel


On Tue, Oct 16, 2018 at 06:27:12PM +0300, Roman Bolshakov wrote:
> Hello dear subscribers,
> 
> I'm running Linux in QEMU on macOS with hvf accel enabled, and I'm
> hitting an issue very similar to the KVM bug seen in nested KVM
> environments, where KVM runs under another hypervisor:
> https://bugs.launchpad.net/qemu/+bug/1636217
> 
> 
> The symptoms are the same as in the bug above. udev hangs unless:
> * -machine type=pc-i440fx-X, where X <=2.6 is used
> * -accel tcg is used
> * -global virtio-pci.disable-modern=on is specified
> 
> The issue was briefly noted on packer mailing list:
> https://groups.google.com/forum/#!topic/packer-tool/je2D0LRhWj0
> 
> If I send Magic SysRq-T to the VM, I can see that virtio_pci hangs
> indefinitely in vp_reset:
> [...]
> 
> It looks like the virtio backend doesn't return a device status of 0
> after vp_iowrite8, so vp_reset blocks udev:
>         while (vp_ioread8(&vp_dev->common->device_status))
>                 msleep(1);
> 
> What could be the cause of the issue?
> Any advice on how to triage it is appreciated.

I wonder what happened in virtio_pci_probe() ->
virtio_pci_modern_probe().  For example, were the BARs properly set up?

For starters you can debug the QEMU process to check if
virtio_pci_common_read/write() get called for this device (disable all
other virtio devices to make life easy).  If these functions aren't
being called then the guest either got the address wrong or dispatch
isn't working for some other reason (hvf?).

Stefan


* Re: [Qemu-devel] modern virtio on HVF
  2018-10-17  9:47 ` Stefan Hajnoczi
@ 2018-10-17 16:05   ` Roman Bolshakov
  0 siblings, 0 replies; 3+ messages in thread
From: Roman Bolshakov @ 2018-10-17 16:05 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

On Wed, Oct 17, 2018 at 10:47:40AM +0100, Stefan Hajnoczi wrote:
> On Tue, Oct 16, 2018 at 06:27:12PM +0300, Roman Bolshakov wrote:
> > 
> > It looks like the virtio backend doesn't return a device status of 0
> > after vp_iowrite8, so vp_reset blocks udev:
> >         while (vp_ioread8(&vp_dev->common->device_status))
> >                 msleep(1);
> > 
> > What could be the cause of the issue?
> > Any advice on how to triage it is appreciated.
> 
> I wonder what happened in virtio_pci_probe() ->
> virtio_pci_modern_probe().  For example, were the BARs properly set up?
> 
> For starters you can debug the QEMU process to check if
> virtio_pci_common_read/write() get called for this device (disable all
> other virtio devices to make life easy).  If these functions aren't
> being called then the guest either got the address wrong or dispatch
> isn't working for some other reason (hvf?).
> 

Thank you, Stefan. That explains why the loop doesn't quit:

virtio_pci_common_write: hwaddr 0x14 val 0x8 size 0x1
38552@1539791595.072718:virtio_set_status vdev 0x7f92d5ef4170 val 8
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1
virtio_pci_common_read: hwaddr 0x14 val 0x8 size 0x1


I executed QEMU a number of times; each run virtio_pci_common_write
logs a different value, even though the virtio_pci driver code clearly
writes 0:
    /* 0 status means a reset. */
    vp_iowrite8(0, &vp_dev->common->device_status);

Maybe the value somehow got corrupted, or it was taken from the wrong
location.

-Roman
