* RBD attach via libvirt to kvm vm - VM kernel hang
@ 2012-03-28 13:27 Sławomir Skowron
  2012-03-28 16:13 ` Tommi Virtanen
  0 siblings, 1 reply; 5+ messages in thread
From: Sławomir Skowron @ 2012-03-28 13:27 UTC (permalink / raw)
  To: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 1710 bytes --]

Host (Dom0): Ubuntu oneiric, kernel 3.0.0-16-server.

ii  kvm             1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu10  dummy transitional package from kvm to qemu-kvm
ii  qemu            1.0+noroms-0ubuntu10                      dummy transitional package from qemu to qemu-kvm
ii  qemu-common     1.0+noroms-0ubuntu10                      qemu common functionality (bios, documentation, etc)
ii  qemu-kvm        1.0+noroms-0ubuntu10                      Full virtualization on i386 and amd64 hardware
ii  qemu-utils      1.0+noroms-0ubuntu10                      qemu utilities
ii  libvirt-bin     0.9.9~release1-2ubuntu6                   programs for the libvirt library
ii  libvirt0        0.9.9~release1-2ubuntu6                   library for interfacing with different virtualization systems
ii  python-libvirt  0.9.9~release1-2ubuntu6                   libvirt Python bindings

<disk type="network" device="disk">
         <driver name="qemu" type="raw"/>
         <source protocol="rbd"
name="rbd/foo:id=admin:rbd_writeback_window=8000000">
             <host name="10.177.64.4" port="6789"/>
             <host name="10.177.64.6" port="6789"/>
             <host name="10.177.64.8" port="6789"/>
         </source>
         <target dev="vdd" bus="virtio"/>
</disk>

Running:

virsh attach-device on-01 /tmp/rbd.xml

reports:

Device attached successfully

and the libvirt log is clean.

VM on-01, with the config shown in the attached dumpxml, hangs after
the rbd device is attached, with the kernel BUG shown in the attached
kernel_bug.txt.

The bug also occurs with a vanilla kvm 1.0 compiled from source, the
same as with the Ubuntu kvm version above.
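
A minimal sketch of how the image can be sanity-checked from the host
first (assuming the ceph/rbd CLI tools are installed and use the same
admin keyring as the domain XML above):

ceph --id admin health          # cluster reachable and healthy?
rbd --id admin -p rbd info foo  # image metadata readable?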

-- 
-----
Regards

Sławek Skowron

[-- Attachment #2: kernel_bug.txt --]
[-- Type: text/plain, Size: 8451 bytes --]

[   68.630913] BUG: unable to handle kernel NULL pointer dereference at 0000000000000049
[   68.632016] IP: [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
[   68.632016] PGD 1b537067 PUD 1afed067 PMD 0
[   68.632016] Oops: 0000 [#1] SMP
[   68.632016] CPU 0
[   68.632016] Modules linked in: bonding psmouse virtio_balloon serio_raw i2c_piix4 acpiphp lp parport floppy ixgbevf
[   68.632016]
[   68.632016] Pid: 17, comm: kworker/0:1 Not tainted 3.0.0-13-xen #22 Bochs Bochs
[   68.632016] RIP: 0010:[<ffffffff8130a6a5>]  [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
[   68.632016] RSP: 0000:ffff88001daa3b20  EFLAGS: 00010282
[   68.632016] RAX: 0000000000000000 RBX: ffff88001da67000 RCX: 00000000000000a4
[   68.632016] RDX: 0000000000000000 RSI: 0000000000000010 RDI: 0000000000000000
[   68.632016] RBP: ffff88001daa3b40 R08: 0000000000000002 R09: ffff88001daa3b1c
[   68.632016] R10: 0000000000000028 R11: 0000000000000000 R12: 0000000000000000
[   68.632016] R13: 0000000000000000 R14: 0000000000000000 R15: 00000000000000a8
[   68.632016] FS:  0000000000000000(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000
[   68.632016] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   68.632016] CR2: 0000000000000049 CR3: 000000001ca7a000 CR4: 00000000000006f0
[   68.632016] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   68.632016] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   68.632016] Process kworker/0:1 (pid: 17, threadinfo ffff88001daa2000, task ffff88001da79720)
[   68.632016] Stack:
[   68.632016]  0000000000000282 ffff88001da67000 ffff88001da67000 0000000000000000
[   68.632016]  ffff88001daa3ba0 ffffffff8131a9f7 ffffffff81c50000 ffff88001da66000
[   68.632016]  ffffffff81c5dcc0 000000001da66000 ffff88001daa3ba0 ffff88001da67000
[   68.632016] Call Trace:
[   68.632016]  [<ffffffff8131a9f7>] pci_set_payload+0xa7/0x140
[   68.632016]  [<ffffffff8131ae38>] pci_configure_slot.part.6+0x18/0x100
[   68.632016]  [<ffffffff8131af52>] pci_configure_slot+0x32/0x40
[   68.632016]  [<ffffffffa003a7e8>] enable_device+0x188/0x9a0 [acpiphp]
[   68.632016]  [<ffffffff81307349>] ? pci_bus_read_config_dword+0x89/0xa0
[   68.632016]  [<ffffffff813485a4>] ? acpi_os_wait_events_complete+0x23/0x23
[   68.632016]  [<ffffffffa0039ed0>] acpiphp_enable_slot+0x80/0xb0 [acpiphp]
[   68.632016]  [<ffffffffa0039fc4>] acpiphp_check_bridge.isra.12+0x64/0xf0 [acpiphp]
[   68.632016]  [<ffffffffa003a213>] handle_hotplug_event_func+0x103/0x1b0 [acpiphp]
[   68.632016]  [<ffffffff8134b0ff>] ? acpi_bus_get_device+0x27/0x40
[   68.632016]  [<ffffffff81357db3>] acpi_ev_notify_dispatch+0x67/0x7e
[   68.632016]  [<ffffffff813485cb>] acpi_os_execute_deferred+0x27/0x34
[   68.632016]  [<ffffffff8107bafa>] process_one_work+0x11a/0x480
[   68.632016]  [<ffffffff8107c8a5>] worker_thread+0x165/0x370
[   68.632016]  [<ffffffff8107c740>] ? manage_workers.isra.30+0x130/0x130
[   68.632016]  [<ffffffff81080cec>] kthread+0x8c/0xa0
[   68.632016]  [<ffffffff81616824>] kernel_thread_helper+0x4/0x10
[   68.632016]  [<ffffffff81080c60>] ? flush_kthread_worker+0xa0/0xa0
[   68.632016]  [<ffffffff81616820>] ? gs_change+0x13/0x13
[   68.632016] Code: 45 cc 48 83 c4 20 5b 41 5c 41 5d 41 5e 5d c3 0f 1f 80 00 00 00 00 55 48 89 e5 48 83 ec 20 48 89 5d f0 4c 89 65 f8 66 66 66 66 90 <0f> b6 57 49 49 89 fc 89 f3 8b 77 38 48 8b 7f 10 e8 e6 f8 ff ff
[   68.632016] RIP  [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
[   68.632016]  RSP <ffff88001daa3b20>
[   68.632016] CR2: 0000000000000049
[   68.698931] ---[ end trace 4ea71c2b1410e496 ]---
[   68.700365] BUG: unable to handle kernel paging request at fffffffffffffff8
[   68.702051] IP: [<ffffffff81081181>] kthread_data+0x11/0x20
[   68.703592] PGD 1c05067 PUD 1c06067 PMD 0
[   68.704312] Oops: 0000 [#2] SMP
[   68.704312] CPU 0
[   68.704312] Modules linked in: bonding psmouse virtio_balloon serio_raw i2c_piix4 acpiphp lp parport floppy ixgbevf
[   68.704312]
[   68.704312] Pid: 17, comm: kworker/0:1 Tainted: G      D     3.0.0-13-xen #22 Bochs Bochs
[   68.704312] RIP: 0010:[<ffffffff81081181>]  [<ffffffff81081181>] kthread_data+0x11/0x20
[   68.704312] RSP: 0000:ffff88001daa3760  EFLAGS: 00010092
[   68.704312] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[   68.704312] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88001da79720
[   68.704312] RBP: ffff88001daa3778 R08: 0000000000989680 R09: 0000000000000001
[   68.704312] R10: 0000000000001400 R11: ffff88001da74598 R12: 0000000000000000
[   68.704312] R13: ffff88001da79ae8 R14: 0000000000000000 R15: 0000000000000246
[   68.704312] FS:  0000000000000000(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000
[   68.704312] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   68.704312] CR2: fffffffffffffff8 CR3: 000000001ca7a000 CR4: 00000000000006f0
[   68.704312] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   68.704312] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   68.704312] Process kworker/0:1 (pid: 17, threadinfo ffff88001daa2000, task ffff88001da79720)
[   68.704312] Stack:
[   68.704312]  ffffffff8107cdf5 ffff88001daa3778 ffff88001fc12a40 ffff88001daa37f8
[   68.704312]  ffffffff8160b327 ffff88001daa37b8 ffff88001da79720 ffff88001daa3fd8
[   68.704312]  ffff88001daa3fd8 ffff88001daa3fd8 0000000000012a40 ffff88001daa37e8
[   68.704312] Call Trace:
[   68.704312]  [<ffffffff8107cdf5>] ? wq_worker_sleeping+0x15/0xa0
[   68.704312]  [<ffffffff8160b327>] __schedule+0x5e7/0x700
[   68.704312]  [<ffffffff81056eff>] schedule+0x3f/0x60
[   68.704312]  [<ffffffff810630d3>] do_exit+0x273/0x440
[   68.704312]  [<ffffffff8160e7d0>] oops_end+0xb0/0xf0
[   68.704312]  [<ffffffff815f5d22>] no_context+0x145/0x152
[   68.704312]  [<ffffffff815f5ebd>] __bad_area_nosemaphore+0x18e/0x1b1
[   68.704312]  [<ffffffff812f171e>] ? vsnprintf+0x35e/0x620
[   68.704312]  [<ffffffff815f5ef3>] bad_area_nosemaphore+0x13/0x15
[   68.704312]  [<ffffffff816110fd>] do_page_fault+0x43d/0x530
[   68.704312]  [<ffffffff8105efcf>] ? console_unlock+0xbf/0x120
[   68.704312]  [<ffffffff8105f04c>] ? console_trylock+0x1c/0x70
[   68.704312]  [<ffffffff810329a9>] ? default_spin_lock_flags+0x9/0x10
[   68.704312]  [<ffffffff814dc8a3>] ? pci_conf1_read+0xc3/0x120
[   68.704312]  [<ffffffff81610905>] do_async_page_fault+0x35/0x80
[   68.704312]  [<ffffffff8160db45>] async_page_fault+0x25/0x30
[   68.704312]  [<ffffffff8130a6a5>] ? pci_find_capability+0x15/0x60
[   68.704312]  [<ffffffff8131a9f7>] pci_set_payload+0xa7/0x140
[   68.704312]  [<ffffffff8131ae38>] pci_configure_slot.part.6+0x18/0x100
[   68.704312]  [<ffffffff8131af52>] pci_configure_slot+0x32/0x40
[   68.704312]  [<ffffffffa003a7e8>] enable_device+0x188/0x9a0 [acpiphp]
[   68.704312]  [<ffffffff81307349>] ? pci_bus_read_config_dword+0x89/0xa0
[   68.704312]  [<ffffffff813485a4>] ? acpi_os_wait_events_complete+0x23/0x23
[   68.704312]  [<ffffffffa0039ed0>] acpiphp_enable_slot+0x80/0xb0 [acpiphp]
[   68.704312]  [<ffffffffa0039fc4>] acpiphp_check_bridge.isra.12+0x64/0xf0 [acpiphp]
[   68.704312]  [<ffffffffa003a213>] handle_hotplug_event_func+0x103/0x1b0 [acpiphp]
[   68.704312]  [<ffffffff8134b0ff>] ? acpi_bus_get_device+0x27/0x40
[   68.704312]  [<ffffffff81357db3>] acpi_ev_notify_dispatch+0x67/0x7e
[   68.704312]  [<ffffffff813485cb>] acpi_os_execute_deferred+0x27/0x34
[   68.704312]  [<ffffffff8107bafa>] process_one_work+0x11a/0x480
[   68.704312]  [<ffffffff8107c8a5>] worker_thread+0x165/0x370
[   68.704312]  [<ffffffff8107c740>] ? manage_workers.isra.30+0x130/0x130
[   68.704312]  [<ffffffff81080cec>] kthread+0x8c/0xa0
[   68.704312]  [<ffffffff81616824>] kernel_thread_helper+0x4/0x10
[   68.704312]  [<ffffffff81080c60>] ? flush_kthread_worker+0xa0/0xa0
[   68.704312]  [<ffffffff81616820>] ? gs_change+0x13/0x13
[   68.704312] Code: 41 5f 5d c3 be 3e 01 00 00 48 c7 c7 f0 b1 9e 81 e8 65 d7 fd ff e9 74 fe ff ff 55 48 89 e5 66 66 66 66 90 48 8b 87 70 03 00 00 5d
[   68.704312]  8b 40 f8 c3 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 66 66
[   68.704312] RIP  [<ffffffff81081181>] kthread_data+0x11/0x20
[   68.704312]  RSP <ffff88001daa3760>
[   68.704312] CR2: fffffffffffffff8
[   68.704312] ---[ end trace 4ea71c2b1410e497 ]---
[   68.704312] Fixing recursive fault but reboot is needed!

[-- Attachment #3: vm_dumpxml.txt --]
[-- Type: text/plain, Size: 3294 bytes --]

<domain type='kvm' id='8'>
  <name>one-101201</name>
  <uuid>a50feee7-db83-0d82-b9ba-cfdae2dfb915</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/nebula/nebula/var/101201/images/disk.0'/>
      <target dev='hda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/nebula/nebula/var/101201/images/disk.1'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/foo:id=admin:rbd_writeback_window=8000000'>
        <host name='10.177.64.4' port='6789'/>
        <host name='10.177.64.6' port='6789'/>
        <host name='10.177.64.8' port='6789'/>
      </source>
      <target dev='vdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x1a' slot='0x10' function='0x3'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x1a' slot='0x10' function='0x2'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'>
    <label>libvirt-a50feee7-db83-0d82-b9ba-cfdae2dfb915</label>
    <imagelabel>libvirt-a50feee7-db83-0d82-b9ba-cfdae2dfb915</imagelabel>
  </seclabel>
</domain>


* Re: RBD attach via libvirt to kvm vm - VM kernel hang
  2012-03-28 13:27 RBD attach via libvirt to kvm vm - VM kernel hang Sławomir Skowron
@ 2012-03-28 16:13 ` Tommi Virtanen
  2012-03-28 16:24   ` Josh Durgin
  0 siblings, 1 reply; 5+ messages in thread
From: Tommi Virtanen @ 2012-03-28 16:13 UTC (permalink / raw)
  To: Sławomir Skowron, Josh Durgin; +Cc: ceph-devel

2012/3/28 Sławomir Skowron <szibis@gmail.com>:
> VM on-01, with the config shown in the attached dumpxml, hangs after
> the rbd device is attached, with the kernel BUG shown in the attached
> kernel_bug.txt.

[   68.630913] BUG: unable to handle kernel NULL pointer dereference
at 0000000000000049
[   68.632016] IP: [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
...
[   68.632016] Call Trace:
[   68.632016]  [<ffffffff8131a9f7>] pci_set_payload+0xa7/0x140
[   68.632016]  [<ffffffff8131ae38>] pci_configure_slot.part.6+0x18/0x100
[   68.632016]  [<ffffffff8131af52>] pci_configure_slot+0x32/0x40
[   68.632016]  [<ffffffffa003a7e8>] enable_device+0x188/0x9a0 [acpiphp]
[   68.632016]  [<ffffffff81307349>] ? pci_bus_read_config_dword+0x89/0xa0
...

Well, that sure looks like a bug. I can't tell whether it's in QEmu,
the QEmu rbd driver, or what. Josh, have you seen a crash like this?
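
The crash is in the guest kernel's PCI hotplug path, so one way to pin
it down is resolving the faulting RIP against the guest kernel's debug
symbols; a sketch, assuming Ubuntu's dbgsym vmlinux path:

addr2line -e /usr/lib/debug/boot/vmlinux-3.0.0-13-xen ffffffff8130a6a5

That should give the exact line in pci_find_capability() where the
guest dies during the ACPI hotplug.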


* Re: RBD attach via libvirt to kvm vm - VM kernel hang
  2012-03-28 16:13 ` Tommi Virtanen
@ 2012-03-28 16:24   ` Josh Durgin
  2012-03-28 19:36     ` Sławomir Skowron
  0 siblings, 1 reply; 5+ messages in thread
From: Josh Durgin @ 2012-03-28 16:24 UTC (permalink / raw)
  To: Tommi Virtanen; +Cc: Sławomir Skowron, ceph-devel

On 03/28/2012 09:13 AM, Tommi Virtanen wrote:
> 2012/3/28 Sławomir Skowron<szibis@gmail.com>:
>> VM on-01, with the config shown in the attached dumpxml, hangs after
>> the rbd device is attached, with the kernel BUG shown in the attached
>> kernel_bug.txt.
>
> [   68.630913] BUG: unable to handle kernel NULL pointer dereference
> at 0000000000000049
> [   68.632016] IP: [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
> ...
> [   68.632016] Call Trace:
> [   68.632016]  [<ffffffff8131a9f7>] pci_set_payload+0xa7/0x140
> [   68.632016]  [<ffffffff8131ae38>] pci_configure_slot.part.6+0x18/0x100
> [   68.632016]  [<ffffffff8131af52>] pci_configure_slot+0x32/0x40
> [   68.632016]  [<ffffffffa003a7e8>] enable_device+0x188/0x9a0 [acpiphp]
> [   68.632016]  [<ffffffff81307349>] ? pci_bus_read_config_dword+0x89/0xa0
> ...
>
> Well, that sure looks like a bug. I can't tell whether it's in QEmu,
> the QEmu rbd driver, or what. Josh, have you seen a crash like this?

I've not seen a crash like this before. I'm not aware of RBD being
treated differently from other block devices in the pci layer in qemu,
so I'd guess this is a qemu or guest kernel bug.

Does attaching a non-rbd disk (still using the virtio driver) cause the
same problem? If not, what distro and kernel version is the guest
running?
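
A minimal sketch of that control test (image path and target device
are hypothetical):

qemu-img create -f raw /tmp/plain.img 1G
cat > /tmp/plain-disk.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/tmp/plain.img'/>
  <target dev='vde' bus='virtio'/>
</disk>
EOF
virsh attach-device on-01 /tmp/plain-disk.xml

If that hot-plug also oopses the guest, the PCI hotplug path rather
than rbd is the suspect.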


* Re: RBD attach via libvirt to kvm vm - VM kernel hang
  2012-03-28 16:24   ` Josh Durgin
@ 2012-03-28 19:36     ` Sławomir Skowron
  2012-04-02 16:33       ` Tommi Virtanen
  0 siblings, 1 reply; 5+ messages in thread
From: Sławomir Skowron @ 2012-03-28 19:36 UTC (permalink / raw)
  To: Josh Durgin; +Cc: Tommi Virtanen, ceph-devel

On 28 Mar 2012, at 18:24, Josh Durgin <josh.durgin@dreamhost.com> wrote:

> On 03/28/2012 09:13 AM, Tommi Virtanen wrote:
>> 2012/3/28 Sławomir Skowron<szibis@gmail.com>:
>>> VM on-01, with the config shown in the attached dumpxml, hangs after
>>> the rbd device is attached, with the kernel BUG shown in the attached
>>> kernel_bug.txt.
>>
>> [   68.630913] BUG: unable to handle kernel NULL pointer dereference
>> at 0000000000000049
>> [   68.632016] IP: [<ffffffff8130a6a5>] pci_find_capability+0x15/0x60
>> ...
>> [   68.632016] Call Trace:
>> [   68.632016]  [<ffffffff8131a9f7>] pci_set_payload+0xa7/0x140
>> [   68.632016]  [<ffffffff8131ae38>] pci_configure_slot.part.6+0x18/0x100
>> [   68.632016]  [<ffffffff8131af52>] pci_configure_slot+0x32/0x40
>> [   68.632016]  [<ffffffffa003a7e8>] enable_device+0x188/0x9a0 [acpiphp]
>> [   68.632016]  [<ffffffff81307349>] ? pci_bus_read_config_dword+0x89/0xa0
>> ...
>>
>> Well, that sure looks like a bug. I can't tell whether it's in QEmu,
>> the QEmu rbd driver, or what. Josh, have you seen a crash like this?
>
> I've not seen a crash like this before. I'm not aware of RBD being
> treated differently from other block devices in the pci layer in qemu,
> so I'd guess this is a qemu or guest kernel bug.
>
> Does attaching a non-rbd disk (still using the virtio driver) cause the
> same problem?

A file-backed disk on virtio already exists in the VM config, and that
configuration has been running stably for some time without rbd.

For tests we previously used qemu-kvm 1.0.0+dfsg+rc2-1~oneiric1 with
the distro libvirt, but we had stability problems with the VF (virtual
function) support on the Intel 10GbE cards, and rbd caused some
problems too.
With the same kernel the guest runs now, and with that 1.0.0-rc2 kvm,
attaching rbd went smoothly, but after some time, while the rbd device
was mounted in the VM, the network would go down.
We could only get into the VM via console/VNC and reload the network
there (see the sketch below). After that everything was fine for a
while, and then it happened again.
That's why we tried the newer libvirt and kvm. Now the VFs on the
Intel network card work like a charm, but attaching rbd crashes the
VM, as you can see. The guest kernel has stayed the same throughout,
as noted below.
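
By "reload the network" I mean something along the lines of the
following, run from the VNC console (the interface name is a guess):

ifdown eth0 && ifup eth0
# or: /etc/init.d/networking restart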

We will try building a kernel from a newer stable version on oneiric
and try to reproduce, but I suspect the kernel is not the trigger for
this bug.

> If not, what distro and kernel version is the guest
> running?

It's the distro kernel 3.0.0-13-server from Ubuntu oneiric, compiled
with Xen compatibility; that's why it shows up as 3.0.0-13-xen.


* Re: RBD attach via libvirt to kvm vm - VM kernel hang
  2012-03-28 19:36     ` Sławomir Skowron
@ 2012-04-02 16:33       ` Tommi Virtanen
  0 siblings, 0 replies; 5+ messages in thread
From: Tommi Virtanen @ 2012-04-02 16:33 UTC (permalink / raw)
  To: Sławomir Skowron; +Cc: Josh Durgin, ceph-devel

2012/3/28 Sławomir Skowron <szibis@gmail.com>:
> With the same kernel the guest runs now, and with that 1.0.0-rc2 kvm,
> attaching rbd went smoothly, but after some time, while the rbd device
> was mounted in the VM, the network would go down.
> We could only get into the VM via console/VNC and reload the network
> there.

Frankly, I don't see how the QEmu RBD driver could cause problems for
your VM networking, barring outright memory corruption. I think your
setup has problems outside of RBD. Sorry. Do let us know if you find
out more.

