* xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
@ 2014-05-17 17:46 Jo Mills
  2014-05-19 10:58 ` Jan Beulich
  2014-05-20 10:14 ` David Vrabel
  0 siblings, 2 replies; 15+ messages in thread
From: Jo Mills @ 2014-05-17 17:46 UTC (permalink / raw)
  To: xen-devel

[-- Attachment #1: Type: text/plain, Size: 17723 bytes --]

Hi,

I started with a thread on xen-users and was asked by Ian Campbell if I 
would take more logs and raise the issue with xen-devel.  Please see 
thread with subject "Create domU with pciback fails, then my system 
re-boots! xen-hypervisor-4.3-amd64" on xen-users for more background 
if you want it.  Below I have to tried to list the steps I have taken 
to reproduce the problem.


[1] Machine "green"

        Intel S3210SH motherboard
        8 GB RAM
        GenuineIntel Intel(R) Core(TM)2 Quad CPU
        BIOS version S3200X38.86B.00.00.52

    (Machine blue is the same hardware - I use them as a pair of DRBD 
     connected hosts).

        
[2] xen and linux version on green:

        dom0 is Debian Jessie:

    Linux version 3.13-1-amd64 (debian-kernel@lists.debian.org) \
        (gcc version 4.8.2 (Debian 4.8.2-16) ) \
            #1 SMP Debian 3.13.10-1 (2014-04-15)

    xen-hypervisor-4.3-amd64            4.3.0-3+b1
    xen-system-amd64                    4.3.0-3+b1
    xen-tools                           4.4-1     
    xen-utils-4.3                       4.3.0-3+b1
    xen-utils-common                    4.3.0-3   
    xenstore-utils                      4.3.0-3+b1


    system is up to date from Jessie as of Sat 17 May 16:58:01 BST 2014
    

[3] grub2 configured as:

    GRUB_CMDLINE_XEN="dom0_mem=2G,max:2G dom0_max_vcpus=1 \
        dom0_vcpus_pin vga=gfx-1280x1024x16 noreboot loglvl=all \
            guest_loglvl=all com1=115200,8n1,0x3f8,4 console=com1,vga"
    GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen"

    null modem cable connected to screen session on my desktop PC.
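
    (For reference, one way to capture that session on the desktop side -
    assuming the null modem shows up there as /dev/ttyS0 - is simply:

        $ screen -L /dev/ttyS0 115200

    screen's -L logging is what produces a screenlog.0 file such as the
    one attached below.)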
    

[4] There are four Ethernet devices fitted; after the various udev
    renamings of the interfaces they come out as:

    eth0 via-rhine 0000:04:00.0 assigned for zone LOC xenbr0

    eth1 via-rhine 0000:04:01.0 assigned for zone DMZ (pci-passthrough)

    eth2 e1000 0000:04:02.0 used for DRBD

    eth3 e1000e 0000:01:00.0 planned for Windows client domU


[5] I have patched /etc/xen/scripts/block-drbd as per 
    http://lists.xen.org/archives/html/xen-devel/2014-02/msg01190.html 
    so there are two case statements modified to be:
    
            case $t in 
                drbd|phy)
                drbd_resource=$p
                drbd_role="$(/sbin/drbdadm role $drbd_resource)"
                    .
                    .
                    .
                    
    and
    
        remove)
            case $t in 
                drbd|phy)
                p=$(xenstore_read "$XENBUS_PATH/params")
                drbd_resource=$p
                    .
                    .
                    .



[6] Start test with the following BIOS options all disabled:

        Intel(R) Virtualization Technology
        Intel(R) VT for Directed I/O
        Multi-Thread Support In MPS table
        Execute Disable Bit


    Re-boot green from console, login via SSH from desktop machine.
    
    ~#  xl pci-assignable-list
    0000:01:00.0
    0000:04:01.0

    so these two cards are available for pci passthrough.  DRBD is 
    working fine with machine "blue" (which is running wheezy and uses 
    xm). 
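
    (For reference, a minimal sketch of how the two NICs typically end up
    assignable - the exact mechanism I use on green isn't shown above, so
    treat the commands below as an assumption rather than a transcript:

    ~# modprobe xen-pciback
    ~# xl pci-assignable-add 0000:04:01.0   # via-rhine planned for the DMZ
    ~# xl pci-assignable-add 0000:01:00.0   # e1000e planned for Windows
    ~# xl pci-assignable-list               # should now show both BDFs

    the same result can also be arranged at boot with the
    xen-pciback.hide= kernel/module parameter.)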
    
    On blue, shut down vm-server-21 and, when it has gone, start it on 
    machine green; this vm uses device 04:01.0.
    
    The vm starts as expected and DRBD is happy.
    
    ~# xl list          
    Name                ID   Mem VCPUs      State   Time(s)
    Domain-0             0  2046     1     r-----      18.4
    vm-server-21         1  1020     1     -b----      16.9

    Shut down this vm on green and re-start it on blue.  Log out of the 
    SSH session and re-start green from the console.  Go into the BIOS 
    and enable:
    
        Intel(R) Virtualization Technology
        Intel(R) VT for Directed I/O
    
    login via SSH from desktop machine.
    
    ~# xl dmesg 
    
        contains    
        
    (XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
    (XEN) traps.c:3061: GPF (0000): ffff82c4c02772f8 -> ffff82c4c0218927
    (XEN) Intel VT-d Snoop Control not enabled.
    (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
    (XEN) Intel VT-d Queued Invalidation not enabled.
    (XEN) Intel VT-d Interrupt Remapping not enabled.
    (XEN) Intel VT-d Shared EPT tables not enabled.
    (XEN) I/O virtualisation enabled

        so I am hopeful that at some point I might get a Windows domU 
        to run (yet "grep vmx /proc/cpuinfo" returns nothing, so maybe 
        it won't work - see the note after the listing below).  As 
        before, the same two Ethernet cards are available via 
        passthrough.
        
    ~#  xl pci-assignable-list
    0000:01:00.0
    0000:04:01.0
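
    (A note on the vmx check above: in a PV dom0 the flags in
    /proc/cpuinfo are the ones Xen exposes to the dom0 kernel, so vmx
    being absent there does not necessarily mean HVM is unavailable.  A
    sketch of a check from dom0 instead, assuming nothing beyond xl
    being installed:

    ~# xl info | grep -E 'xen_caps|virt_caps'
    ~# xl dmesg | grep -iE 'vmx|hvm'

    if xen_caps lists hvm-3.0-x86_64, HVM guests should be possible.)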
    
    
    If I now shut down vm-server-21 on node blue and re-start it on green:

~# xl -vvv create -c /etc/xen/vm-server-21.cfg
Parsing config from /etc/xen/vm-server-21.cfg
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x19671f0: create: how=(nil) callback=(nil) poller=0x1966ba0
libxl: verbose: libxl_create.c:130:libxl__domain_build_info_setdefault: qemu-xen is unavailable, use qemu-xen-traditional instead: No such file or directory
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=unknown
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdb, uses script=... assuming phy backend
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=xvdb, using backend phy
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x1967608: deregister unregistered
libxl: debug: libxl_x86.c:82:e820_sanitize: Memory: 1048576kB End of RAM: 0x40000 (PFN) Delta: 0kB, PCI start: 3665832kB (0xdfbea PFN), Balloon 0kB

libxl: debug: libxl_x86.c:201:e820_sanitize: :  [0 -> 40000] RAM
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [40000 -> dfbea] Unusable
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfbea -> dfc96] ACPI NVS
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfc96 -> dfcfa] Unusable
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfcfa -> dfd5f] Reserved
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfd5f -> dfd69] Unusable
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfd69 -> dfddf] ACPI NVS
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfddf -> dfde5] Unusable
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfde5 -> dfdff] ACPI
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfdff -> dfe00] Unusable
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [dfe00 -> dff00] Reserved
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [f0000 -> f4000] Reserved
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [fee00 -> fee01] Reserved
libxl: debug: libxl_x86.c:201:e820_sanitize: :  [fff80 -> fff8c] Reserved
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvdb ro xencons=tty swiotlb=force", features="(null)"
libxl: debug: libxl_dom.c:341:libxl__build_pv: pv kernel mapped 0 path /boot/vmlinuz-2.6.26-2-xen-amd64

domainbuilder: detail: xc_dom_kernel_file: filename="/boot/vmlinuz-2.6.26-2-xen-amd64"
domainbuilder: detail: xc_dom_malloc_filemap    : 1666 kB
domainbuilder: detail: xc_dom_malloc            : 7801 kB
domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x1a0b72 -> 0x79e530
domainbuilder: detail: xc_dom_ramdisk_file: filename="/boot/initrd.img-2.6.26-2-xen-amd64"
domainbuilder: detail: xc_dom_malloc_filemap    : 7926 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.3, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... 
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ... 
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x200000 memsz=0x2fe000
xc: detail: elf_parse_binary: phdr: paddr=0x4fe000 memsz=0x529a8
xc: detail: elf_parse_binary: phdr: paddr=0x551000 memsz=0x888
xc: detail: elf_parse_binary: phdr: paddr=0x552000 memsz=0xdf918
xc: detail: elf_parse_binary: memory: 0x200000 -> 0x631918
xc: detail: elf_xen_parse_note: GUEST_OS = "linux"
xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6"
xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0"
xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0
xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff80200000
xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff80208000
xc: detail: elf_xen_parse_note: unknown xen elf note (0xd)
xc: detail: elf_xen_parse_note: FEATURES = "writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel"
xc: detail: elf_xen_parse_note: LOADER = "generic"
xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1
xc: detail: elf_xen_addr_calc_check: addresses:
xc: detail:     virt_base        = 0xffffffff80000000
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0xffffffff80000000
xc: detail:     virt_kstart      = 0xffffffff80200000
xc: detail:     virt_kend        = 0xffffffff80631918
xc: detail:     virt_entry       = 0xffffffff80200000
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff80200000 -> 0xffffffff80631918
domainbuilder: detail: xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x40000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
domainbuilder: detail: xc_dom_malloc            : 2048 kB
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0xffffffff80200000 -> 0xffffffff80632000  (pfn 0x200 + 0x432 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x200+0x432 at 0x7f6b8b0cc000
xc: detail: elf_load_binary: phdr 0 at 0x7f6b8b0cc000 -> 0x7f6b8b3ca000
xc: detail: elf_load_binary: phdr 1 at 0x7f6b8b3ca000 -> 0x7f6b8b41c9a8
xc: detail: elf_load_binary: phdr 2 at 0x7f6b8b41d000 -> 0x7f6b8b41d888
xc: detail: elf_load_binary: phdr 3 at 0x7f6b8b41e000 -> 0x7f6b8b45c6b0
domainbuilder: detail: xc_dom_alloc_segment:   ramdisk      : 0xffffffff80632000 -> 0xffffffff81d14000  (pfn 0x632 + 0x16e2 pages)
domainbuilder: detail: xc_dom_malloc            : 137 kB
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x632+0x16e2 at 0x7f6b899ea000
domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x7bd8b0 -> 0x16e1610
domainbuilder: detail: xc_dom_alloc_segment:   phys2mach    : 0xffffffff81d14000 -> 0xffffffff81f14000  (pfn 0x1d14 + 0x200 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x1d14+0x200 at 0x7f6b897ea000
domainbuilder: detail: xc_dom_alloc_page   :   start info   : 0xffffffff81f14000 (pfn 0x1f14)
domainbuilder: detail: xc_dom_alloc_page   :   xenstore     : 0xffffffff81f15000 (pfn 0x1f15)
domainbuilder: detail: xc_dom_alloc_page   :   console      : 0xffffffff81f16000 (pfn 0x1f16)
domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff81ffffff, 16 table(s)
domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xffffffff81f17000 -> 0xffffffff81f2a000  (pfn 0x1f17 + 0x13 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x1f17+0x13 at 0x7f6b8e36f000
domainbuilder: detail: xc_dom_alloc_page   :   boot stack   : 0xffffffff81f2a000 (pfn 0x1f2a)
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0xffffffff81f2b000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0xffffffff82000000
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x40000
domainbuilder: detail: clear_page: pfn 0x1f16, mfn 0x198307
domainbuilder: detail: clear_page: pfn 0x1f15, mfn 0x198308
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x1f14+0x1 at 0x7f6b8e550000
domainbuilder: detail: start_info_x86_64: called
domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff80208000 pfn=0x208
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 10027 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 9593 kB
domainbuilder: detail:       domU mmap          : 29856 kB
domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xdfc99
domainbuilder: detail: shared_info_x86_64: called
domainbuilder: detail: vcpu_x86_64: called
domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x1f17 mfn 0x198306
domainbuilder: detail: launch_vm: called, ctxt=0x7fff4af4aee0
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdb, uses script=... assuming phy backend
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1968528 wpath=/local/domain/0/backend/vbd/1/51728/state token=3/0: register slotnum=3
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x196fb78 wpath=/local/domain/0/backend/vbd/1/51712/state token=2/1: register slotnum=2
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x19671f0: inprogress: poller=0x1966ba0, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1968528 wpath=/local/domain/0/backend/vbd/1/51728/state token=3/0: event epath=/local/domain/0/backend/vbd/1/51728/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/1/51728/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x196fb78 wpath=/local/domain/0/backend/vbd/1/51712/state token=2/1: event epath=/local/domain/0/backend/vbd/1/51712/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/1/51712/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1968528 wpath=/local/domain/0/backend/vbd/1/51728/state token=3/0: event epath=/local/domain/0/backend/vbd/1/51728/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/1/51728/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x1968528 wpath=/local/domain/0/backend/vbd/1/51728/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x1968528: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block-drbd add
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x196fb78 wpath=/local/domain/0/backend/vbd/1/51712/state token=2/1: event epath=/local/domain/0/backend/vbd/1/51712/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/1/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x196fb78 wpath=/local/domain/0/backend/vbd/1/51712/state token=2/1: deregister slotnum=2
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x196fb78: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add

    and it just sits here.
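
    (The usual places to look when a hotplug script wedges like this -
    assuming the default Debian paths - are its log and the xenstore
    state for the vbd backends, for example:

    ~# tail -n 50 /var/log/xen/xen-hotplug.log
    ~# xenstore-ls -f /local/domain/0/backend/vbd/1
    ~# ps axf | grep '[b]lock-drbd'

    though in this case dom0 panics shortly afterwards, as below.)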


    I get the following at the end of the serial console log:
    
    [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
    [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
    (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.

    and that's "green" now stuck.  It also "did something horrible" to 
    node blue as there all my vm's seem to have vanished - but that's 
    a different problem.  Re-boot both green and blue, re-start vm's 
    on node blue which seems OK now, DRBD is also happy.
    
    Serial console log for node green attached as 
    screenlog.0.serial_console_log.tgz.
    
    If there is anything more I can add, please let me know and I'll 
    try my best to get more information.

Best regards,

Jo.

[-- Attachment #2: screenlog.0.serial_console_log.tgz --]
[-- Type: application/x-gtar-compressed, Size: 7317 bytes --]



* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-17 17:46 xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic Jo Mills
@ 2014-05-19 10:58 ` Jan Beulich
  2014-05-19 11:03   ` Ian Campbell
  2014-05-20 10:14 ` David Vrabel
  1 sibling, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2014-05-19 10:58 UTC (permalink / raw)
  To: Jo Mills; +Cc: xen-devel

>>> On 17.05.14 at 19:46, <jo@maniscorse.co.uk> wrote:
> [2] xen and linux version on green:
> 
>         dom0 is Debian Jessie:
> 
>     Linux version 3.13-1-amd64 (debian-kernel@lists.debian.org) \
>         (gcc version 4.8.2 (Debian 4.8.2-16) ) \
>             #1 SMP Debian 3.13.10-1 (2014-04-15)
> 
>     xen-hypervisor-4.3-amd64            4.3.0-3+b1
>     xen-system-amd64                    4.3.0-3+b1
>     xen-tools                           4.4-1     

You clearly should be using matching hypervisor and tools versions.

Jan


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-19 10:58 ` Jan Beulich
@ 2014-05-19 11:03   ` Ian Campbell
  0 siblings, 0 replies; 15+ messages in thread
From: Ian Campbell @ 2014-05-19 11:03 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Jo Mills, xen-devel

On Mon, 2014-05-19 at 11:58 +0100, Jan Beulich wrote:
> >>> On 17.05.14 at 19:46, <jo@maniscorse.co.uk> wrote:
> > [2] xen and linux version on green:
> > 
> >         dom0 is Debian Jessie:
> > 
> >     Linux version 3.13-1-amd64 (debian-kernel@lists.debian.org) \
> >         (gcc version 4.8.2 (Debian 4.8.2-16) ) \
> >             #1 SMP Debian 3.13.10-1 (2014-04-15)
> > 
> >     xen-hypervisor-4.3-amd64            4.3.0-3+b1
> >     xen-system-amd64                    4.3.0-3+b1
> >     xen-tools                           4.4-1     
> 
> You clearly should be using matching hypervisor and tools versions.

In Debian, xen-tools is the xen-create-image helper suite from
http://xen-tools.org/. The Xen utilities packages are xen-utils-X.Y.

Confusing I know...

Ian.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-17 17:46 xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic Jo Mills
  2014-05-19 10:58 ` Jan Beulich
@ 2014-05-20 10:14 ` David Vrabel
  2014-05-20 10:18   ` Ian Campbell
  1 sibling, 1 reply; 15+ messages in thread
From: David Vrabel @ 2014-05-20 10:14 UTC (permalink / raw)
  To: Jo Mills, xen-devel

On 17/05/14 18:46, Jo Mills wrote:
> 
>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.

We need the full backtrace from the kernel.

David


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-20 10:14 ` David Vrabel
@ 2014-05-20 10:18   ` Ian Campbell
  2014-05-20 16:20     ` David Vrabel
  0 siblings, 1 reply; 15+ messages in thread
From: Ian Campbell @ 2014-05-20 10:18 UTC (permalink / raw)
  To: David Vrabel; +Cc: Jo Mills, xen-devel

On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
> On 17/05/14 18:46, Jo Mills wrote:
> > 
> >     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
> >     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
> >     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> 
> We need the full backtrace from the kernel.

It was in the attached serial console log. I've pasted what I think is
the relevant bit below.

Ian.


[  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
[  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
[  486.876191]   Tx Queue             <0>
[  486.876191]   TDH                  <a5>
[  486.876191]   TDT                  <b5>
[  486.876191]   next_to_use          <b5>
[  486.876191]   next_to_clean        <a5>
[  486.876191] buffer_info[next_to_clean]
[  486.876191]   time_stamp           <10000b439>
[  486.876191]   next_to_watch        <a6>
[  486.876191]   jiffies              <10000b67f>
[  486.876191]   next_to_watch.status <0>
[  488.880201] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
[  488.880201]   Tx Queue             <0>
[  488.880201]   TDH                  <a5>
[  488.880201]   TDT                  <b5>
[  488.880201]   next_to_use          <b5>
[  488.880201]   next_to_clean        <a5>
[  488.880201] buffer_info[next_to_clean]
[  488.880201]   time_stamp           <10000b439>
[  488.880201]   next_to_watch        <a6>
[  488.880201]   jiffies              <10000b874>
[  488.880201]   next_to_watch.status <0>
[  490.884218] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
[  490.884218]   Tx Queue             <0>
[  490.884218]   TDH                  <a5>
[  490.884218]   TDT                  <b5>
[  490.884218]   next_to_use          <b5>
[  490.884218]   next_to_clean        <a5>
[  490.884218] buffer_info[next_to_clean]
[  490.884218]   time_stamp           <10000b439>
[  490.884218]   next_to_watch        <a6>
[  490.884218]   jiffies              <10000ba69>
[  490.884218]   next_to_watch.status <0>
[  490.924060] d-con vm-12-disk: PingAck did not arrive in time.
[  492.416058] d-con vm-13-disk: PingAck did not arrive in time.
[  492.832061] e1000 0000:04:02.0 eth2: Reset adapter
[  493.256061] d-con vm-21-disk: PingAck did not arrive in time.
[  493.280521] d-con vm-21-disk: out of mem, failed to invoke fence-peer helper
[  498.336083] BUG: unable to handle kernel paging request at ffff880076070000
[  498.336101] IP: [<ffffffff8127cb3d>] memcpy+0xd/0x110
[  498.336109] PGD 180d067 PUD 1ab0067 PMD 7fc5a067 PTE 0
[  498.336114] Oops: 0000 [#1] SMP 
[  498.336118] Modules linked in: xen_blkback xen_gntdev xen_evtchn xenfs xen_privcmd drbd lru_cache crc32c libcrc32c bridge stp llc loop coretemp iTCO_wdt iTCO_vendor_support psmouse pcspkr serio_raw evdev i2c_i801 i2c_core i3200_edac edac_core lpc_ich mfd_core processor thermal_sys button ext4 crc16 mbcache jbd2 dm_mod sg sd_mod sr_mod crc_t10dif crct10dif_common cdrom via_rhine e1000e ata_generic xen_pciback ata_piix mii e1000 libata scsi_mod ptp pps_core floppy uhci_hcd ehci_pci ehci_hcd usbcore usb_common
[  498.336163] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W    3.13-1-amd64 #1 Debian 3.13.10-1
[  498.336168] Hardware name: Intel Corporation S3210SH/S3210SH, BIOS S3200X38.86B.00.00.0052.112920101508 11/29/2010
[  498.336172] task: ffffffff81813460 ti: ffffffff81800000 task.ti: ffffffff81800000
[  498.336175] RIP: e030:[<ffffffff8127cb3d>]  [<ffffffff8127cb3d>] memcpy+0xd/0x110
[  498.336180] RSP: e02b:ffff88007f603b50  EFLAGS: 00010246
[  498.336183] RAX: ffff88007a320800 RBX: 0000000000000001 RCX: 00000000000000b1
[  498.336186] RDX: 0000000000000000 RSI: ffff880076070000 RDI: ffff88007a320800
[  498.336189] RBP: 000000007a320800 R08: 0000000000000001 R09: 0000000000000000
[  498.336192] R10: 0000000000000007 R11: 00000000000002c0 R12: 0000000000000001
[  498.336195] R13: 0000000000000380 R14: 0000000000200000 R15: 0000000000002241
[  498.336201] FS:  00007f5ad4dba800(0000) GS:ffff88007f600000(0000) knlGS:0000000000000000
[  498.336204] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  498.336207] CR2: ffff880076070000 CR3: 00000000040a5000 CR4: 0000000000042660
[  498.336211] Stack:
[  498.336213]  ffffffff812933e7 0000000000000001 ffff8800788ee098 0000000000000588
[  498.336218]  0000000000000288 0000000076070000 000000017f603c20 0000000000000588
[  498.336222]  ffff8800788ee098 00000001a18b7000 0000000000000000 0000000076070000
[  498.336227] Call Trace:
[  498.336229]  <IRQ> 
[  498.336231]  [<ffffffff812933e7>] ? swiotlb_tbl_map_single+0x257/0x2a0
[  498.336239]  [<ffffffff81317526>] ? xen_swiotlb_map_page+0xd6/0x2d0
[  498.336246]  [<ffffffffa00fb0d6>] ? e1000_xmit_frame+0x7f6/0xf70 [e1000]
[  498.336251]  [<ffffffff813c081a>] ? dev_hard_start_xmit+0x2da/0x4f0
[  498.336256]  [<ffffffff8143087c>] ? fib_table_lookup+0x2dc/0x390
[  498.336261]  [<ffffffff813dd9d1>] ? sch_direct_xmit+0xc1/0x190
[  498.336264]  [<ffffffff813c0c2d>] ? __dev_queue_xmit+0x1ed/0x480
[  498.336269]  [<ffffffff813f6018>] ? ip_finish_output+0x2b8/0x380
[  498.336273]  [<ffffffff813f6ec9>] ? ip_queue_xmit+0x129/0x3a0
[  498.336277]  [<ffffffff8140cb25>] ? tcp_transmit_skb+0x425/0x8a0
[  498.336281]  [<ffffffff8140e55f>] ? tcp_retransmit_skb+0xf/0xf0
[  498.336284]  [<ffffffff814102dc>] ? tcp_retransmit_timer+0x27c/0x6d0
[  498.336318]  [<ffffffff814108e0>] ? tcp_write_timer_handler+0x1b0/0x1b0
[  498.336321]  [<ffffffff814107d0>] ? tcp_write_timer_handler+0xa0/0x1b0
[  498.336325]  [<ffffffff81410938>] ? tcp_write_timer+0x58/0x60
[  498.336329]  [<ffffffff81066cfc>] ? call_timer_fn+0x2c/0x100
[  498.336333]  [<ffffffff814108e0>] ? tcp_write_timer_handler+0x1b0/0x1b0
[  498.336337]  [<ffffffff81067b39>] ? run_timer_softirq+0x1f9/0x2b0
[  498.336341]  [<ffffffff81060802>] ? __do_softirq+0xe2/0x260
[  498.336344]  [<ffffffff81060c35>] ? irq_exit+0x95/0xa0
[  498.336348]  [<ffffffff8130e807>] ? xen_evtchn_do_upcall+0x27/0x40
[  498.336353]  [<ffffffff814b047e>] ? xen_do_hypervisor_callback+0x1e/0x30
[  498.336356]  <EOI> 
[  498.336358]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  498.336364]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  498.336368]  [<ffffffff8100997c>] ? xen_safe_halt+0xc/0x20
[  498.336372]  [<ffffffff8101a5e4>] ? default_idle+0x14/0xb0
[  498.336377]  [<ffffffff810a77be>] ? cpu_startup_entry+0xbe/0x280
[  498.336381]  [<ffffffff818c4f2d>] ? start_kernel+0x44e/0x459
[  498.336384]  [<ffffffff818c4904>] ? repair_env_string+0x58/0x58
[  498.336388]  [<ffffffff818c6dbc>] ? xen_start_kernel+0x535/0x53f
[  498.336391] Code: fc ff ff 48 8b 43 58 48 2b 43 50 88 43 4e eb e9 90 90 90 90 90 90 90 90 90 90 90 90 90 90 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 <f3> 48 a5 89 d1 f3 a4 c3 20 4c 8b 06 4c 8b 4e 08 4c 8b 56 10 4c 
[  498.336420] RIP  [<ffffffff8127cb3d>] memcpy+0xd/0x110
[  498.336424]  RSP <ffff88007f603b50>
[  498.336426] CR2: ffff880076070000
[  498.336430] ---[ end trace b9630577ecf84cd8 ]---
[  498.340053] Kernel panic - not syncing: Fatal exception in interrupt


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-20 10:18   ` Ian Campbell
@ 2014-05-20 16:20     ` David Vrabel
  2014-05-20 18:41       ` Jo Mills
  0 siblings, 1 reply; 15+ messages in thread
From: David Vrabel @ 2014-05-20 16:20 UTC (permalink / raw)
  To: Ian Campbell, David Vrabel; +Cc: Jo Mills, xen-devel

On 20/05/14 11:18, Ian Campbell wrote:
> On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
>> On 17/05/14 18:46, Jo Mills wrote:
>>>
>>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
>>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
>>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
>>
>> We need the full backtrace from the kernel.
> 
> It was in the attached serial console log. I've pasted what I think is
> the relevant bit below.

Ah. I hadn't noticed the attachment.

> [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
> [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
> [  486.876191]   Tx Queue             <0>
> [  486.876191]   TDH                  <a5>
> [  486.876191]   TDT                  <b5>
> [  486.876191]   next_to_use          <b5>
> [  486.876191]   next_to_clean        <a5>
> [  486.876191] buffer_info[next_to_clean]
> [  486.876191]   time_stamp           <10000b439>
> [  486.876191]   next_to_watch        <a6>
> [  486.876191]   jiffies              <10000b67f>
> [  486.876191]   next_to_watch.status <0>

This looks like an e1000 driver/hardware problem.  It doesn't
immediately look Xen specific.

David


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-20 16:20     ` David Vrabel
@ 2014-05-20 18:41       ` Jo Mills
  2014-05-21  9:42         ` David Vrabel
  0 siblings, 1 reply; 15+ messages in thread
From: Jo Mills @ 2014-05-20 18:41 UTC (permalink / raw)
  To: David Vrabel; +Cc: Ian Campbell, xen-devel

On Tue, May 20, 2014 at 05:20:10PM +0100, David Vrabel wrote:
> On 20/05/14 11:18, Ian Campbell wrote:
> > On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
> >> On 17/05/14 18:46, Jo Mills wrote:
> >>>
> >>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
> >>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
> >>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> >>
> >> We need the full backtrace from the kernel.
> > 
> > It was in the attached serial console log. I've pasted what I think is
> > the relevant bit below.
> 
> Ah. I hadn't noticed the attachment.
> 
> > [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
> > [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
> > [  486.876191]   Tx Queue             <0>
> > [  486.876191]   TDH                  <a5>
> > [  486.876191]   TDT                  <b5>
> > [  486.876191]   next_to_use          <b5>
> > [  486.876191]   next_to_clean        <a5>
> > [  486.876191] buffer_info[next_to_clean]
> > [  486.876191]   time_stamp           <10000b439>
> > [  486.876191]   next_to_watch        <a6>
> > [  486.876191]   jiffies              <10000b67f>
> > [  486.876191]   next_to_watch.status <0>
> 
> This looks like an e1000 driver/hardware problem.  It doesn't
> immediately look Xen specific.
> 
> David

Hi David et al,

Many thanks for your replies and for your time spent looking into this 
problem.  

I probably should have mentioned in my report that when BIOS settings:

    Intel(R) Virtualization Technology
    Intel(R) VT for Directed I/O
    
are enabled, creating VMs that do not use a pci passthrough device 
works just fine.  It's only when I try and create a VM that does use a 
pci passthrough device that the crash happens.  Whether this is 
relevant or not I'm not competent enough to say.

Thanks again for all your help,

Regards,

Jo.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-20 18:41       ` Jo Mills
@ 2014-05-21  9:42         ` David Vrabel
  2014-05-22  9:47           ` Konrad Rzeszutek Wilk
       [not found]           ` <20140522094744.GA8264@localhost.localdomain>
  0 siblings, 2 replies; 15+ messages in thread
From: David Vrabel @ 2014-05-21  9:42 UTC (permalink / raw)
  To: Jo Mills; +Cc: Ian Campbell, xen-devel

On 20/05/14 19:41, Jo Mills wrote:
> On Tue, May 20, 2014 at 05:20:10PM +0100, David Vrabel wrote:
>> On 20/05/14 11:18, Ian Campbell wrote:
>>> On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
>>>> On 17/05/14 18:46, Jo Mills wrote:
>>>>>
>>>>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
>>>>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
>>>>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
>>>>
>>>> We need the full backtrace from the kernel.
>>>
>>> It was in the attached serial console log. I've pasted what I think is
>>> the relevant bit below.
>>
>> Ah. I hadn't noticed the attachment.
>>
>>> [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
>>> [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
>>> [  486.876191]   Tx Queue             <0>
>>> [  486.876191]   TDH                  <a5>
>>> [  486.876191]   TDT                  <b5>
>>> [  486.876191]   next_to_use          <b5>
>>> [  486.876191]   next_to_clean        <a5>
>>> [  486.876191] buffer_info[next_to_clean]
>>> [  486.876191]   time_stamp           <10000b439>
>>> [  486.876191]   next_to_watch        <a6>
>>> [  486.876191]   jiffies              <10000b67f>
>>> [  486.876191]   next_to_watch.status <0>
>>
>> This looks like an e1000 driver/hardware problem.  It doesn't
>> immediately look Xen specific.
>>
>> David
> 
> Hi David et al,
> 
> Many thanks for your replies and for your time spent looking into this 
> problem.  
> 
> I probably should have mentioned in my report that when BIOS settings:
> 
>     Intel(R) Virtualization Technology
>     Intel(R) VT for Directed I/O
>     
> are enabled, creating VMs that do not use a pci passthrough device 
> works just fine.  It's only when I try and create a VM that does use a 
> pci passthrough device that the crash happens.  Whether this is 
> relevant or not I'm not competent enough to say.

It's not obviously related since it's a device/driver in dom0 that's
broken.  I would suggest asking the e1000 maintainers if the Tx Unit
Hang debug output hints at anything.

David


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-05-21  9:42         ` David Vrabel
@ 2014-05-22  9:47           ` Konrad Rzeszutek Wilk
       [not found]           ` <20140522094744.GA8264@localhost.localdomain>
  1 sibling, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-05-22  9:47 UTC (permalink / raw)
  To: David Vrabel, e1000-devel, jeffrey.t.kirsher, jesse.brandeburg,
	bruce.w.allan, carolyn.wyborny, donald.c.skidmore,
	gregory.v.rose, alexander.h.duyck, john.ronciak,
	mitch.a.williams
  Cc: Jo Mills, Ian Campbell, xen-devel

On Wed, May 21, 2014 at 10:42:30AM +0100, David Vrabel wrote:
> On 20/05/14 19:41, Jo Mills wrote:
> > On Tue, May 20, 2014 at 05:20:10PM +0100, David Vrabel wrote:
> >> On 20/05/14 11:18, Ian Campbell wrote:
> >>> On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
> >>>> On 17/05/14 18:46, Jo Mills wrote:
> >>>>>
> >>>>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
> >>>>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
> >>>>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> >>>>
> >>>> We need the full backtrace from the kernel.
> >>>
> >>> It was in the attached serial console log. I've pasted what I think is
> >>> the relevant bit below.
> >>
> >> Ah. I hadn't noticed the attachment.
> >>
> >>> [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
> >>> [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
> >>> [  486.876191]   Tx Queue             <0>
> >>> [  486.876191]   TDH                  <a5>
> >>> [  486.876191]   TDT                  <b5>
> >>> [  486.876191]   next_to_use          <b5>
> >>> [  486.876191]   next_to_clean        <a5>
> >>> [  486.876191] buffer_info[next_to_clean]
> >>> [  486.876191]   time_stamp           <10000b439>
> >>> [  486.876191]   next_to_watch        <a6>
> >>> [  486.876191]   jiffies              <10000b67f>
> >>> [  486.876191]   next_to_watch.status <0>
> >>
> >> This looks like an e1000 driver/hardware problem.  It doesn't
> >> immediately look Xen specific.
> >>
> >> David
> > 
> > Hi David et al,
> > 
> > Many thanks for your replies and for your time spent looking into this 
> > problem.  
> > 
> > I probably should have mentioned in my report that when BIOS settings:
> > 
> >     Intel(R) Virtualization Technology
> >     Intel(R) VT for Directed I/O
> >     
> > are enabled, creating VMs that do not use a pci passthrough device 
> > works just fine.  It's only when I try and create a VM that does use a 
> > pci passthrough device that the crash happens.  Whether this is 
> > relevant or not I'm not competent enough to say.
> 
> It's not obviously related since it's a device/driver in dom0 that's
> broken.  I would suggest asking the e1000 maintainers if the Tx Unit
> Hang debug output hints at anything.

Let's CC them.

The thread is also available at:
http://lists.xen.org/archives/html/xen-devel/2014-05/msg02259.html

Thank you!
> 
> David
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
       [not found]           ` <20140522094744.GA8264@localhost.localdomain>
@ 2014-05-22 10:03             ` Jo Mills
       [not found]             ` <20140522100327.GG7332@white.maniscorse>
  1 sibling, 0 replies; 15+ messages in thread
From: Jo Mills @ 2014-05-22 10:03 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: alexander.h.duyck, Ian Campbell, e1000-devel, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, David Vrabel, carolyn.wyborny,
	jeffrey.t.kirsher

On Thu, May 22, 2014 at 05:47:45AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, May 21, 2014 at 10:42:30AM +0100, David Vrabel wrote:
> > On 20/05/14 19:41, Jo Mills wrote:
> > > On Tue, May 20, 2014 at 05:20:10PM +0100, David Vrabel wrote:
> > >> On 20/05/14 11:18, Ian Campbell wrote:
> > >>> On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
> > >>>> On 17/05/14 18:46, Jo Mills wrote:
> > >>>>>
> > >>>>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
> > >>>>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
> > >>>>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> > >>>>
> > >>>> We need the full backtrace from the kernel.
> > >>>
> > >>> It was in the attached serial console log. I've pasted what I think is
> > >>> the relevant bit below.
> > >>
> > >> Ah. I hadn't noticed the attachment.
> > >>
> > >>> [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
> > >>> [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
> > >>> [  486.876191]   Tx Queue             <0>
> > >>> [  486.876191]   TDH                  <a5>
> > >>> [  486.876191]   TDT                  <b5>
> > >>> [  486.876191]   next_to_use          <b5>
> > >>> [  486.876191]   next_to_clean        <a5>
> > >>> [  486.876191] buffer_info[next_to_clean]
> > >>> [  486.876191]   time_stamp           <10000b439>
> > >>> [  486.876191]   next_to_watch        <a6>
> > >>> [  486.876191]   jiffies              <10000b67f>
> > >>> [  486.876191]   next_to_watch.status <0>
> > >>
> > >> This looks like an e1000 driver/hardware problem.  It doesn't
> > >> immediately look Xen specific.
> > >>
> > >> David
> > > 
> > > Hi David et al,
> > > 
> > > Many thanks for your replies and for your time spent looking into this 
> > > problem.  
> > > 
> > > I probably should have mentioned in my report that when BIOS settings:
> > > 
> > >     Intel(R) Virtualization Technology
> > >     Intel(R) VT for Directed I/O
> > >     
> > > are enabled, creating VMs that do not use a pci passthrough device 
> > > works just fine.  It's only when I try and create a VM that does use a 
> > > pci passthrough device that the crash happens.  Whether this is 
> > > relevant or not I'm not competent enough to say.
> > 
> > It's not obviously related since it's a device/driver in dom0 that's
> > broken.  I would suggest asking the e1000 maintainers if the Tx Unit
> > Hang debug output hints at anything.
> 
> Lets CC them.
> 
> The thread is also available at:
> http://lists.xen.org/archives/html/xen-devel/2014-05/msg02259.html
> 
> Thank you!


Hi Konrad et al,

I mailed e1000-devel@lists.sourceforge.net last night:

    Date: Wed, 21 May 2014 21:18:26 +0100
    To: e1000-devel <e1000-devel@lists.sourceforge.net>
    User-Agent: Mutt/1.5.21 (2010-09-15)
    From: Jo Mills <jo@maniscorse.co.uk>
    Subject: linux-image-3.13-1-amd64  3.13.10-1, Intel M/B, \
                    I/O virt. enabled, start vm -> e1000 "Tx Unit Hang"

with the same information as I raised with xen-devel.  I did think 
carefully about CC'ing xen-devel, but decided it probably wasn't 
reasonable to include yet another copy of my bug report here. I was 
going to update the xen-devel list when I heard back from the e1000 
maintainers. 

Thanks again for your interest and support,

Best regards,

Jo.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
       [not found]             ` <20140522100327.GG7332@white.maniscorse>
@ 2014-05-23 15:50               ` Konrad Rzeszutek Wilk
       [not found]               ` <20140523155015.GC5209@phenom.dumpdata.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-05-23 15:50 UTC (permalink / raw)
  To: Jo Mills
  Cc: alexander.h.duyck, Ian Campbell, e1000-devel, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, David Vrabel, carolyn.wyborny,
	jeffrey.t.kirsher

On Thu, May 22, 2014 at 11:03:27AM +0100, Jo Mills wrote:
> On Thu, May 22, 2014 at 05:47:45AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, May 21, 2014 at 10:42:30AM +0100, David Vrabel wrote:
> > > On 20/05/14 19:41, Jo Mills wrote:
> > > > On Tue, May 20, 2014 at 05:20:10PM +0100, David Vrabel wrote:
> > > >> On 20/05/14 11:18, Ian Campbell wrote:
> > > >>> On Tue, 2014-05-20 at 11:14 +0100, David Vrabel wrote:
> > > >>>> On 17/05/14 18:46, Jo Mills wrote:
> > > >>>>>
> > > >>>>>     [  498.336430] ---[ end trace b9630577ecf84cd8 ]---
> > > >>>>>     [  498.340053] Kernel panic - not syncing: Fatal exception in interrupt
> > > >>>>>     (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> > > >>>>
> > > >>>> We need the full backtrace from the kernel.
> > > >>>
> > > >>> It was in the attached serial console log. I've pasted what I think is
> > > >>> the relevant bit below.
> > > >>
> > > >> Ah. I hadn't noticed the attachment.
> > > >>
> > > >>> [  485.048066] d-con vm-13-disk2: PingAck did not arrive in time.
> > > >>> [  486.876191] e1000 0000:04:02.0 eth2: Detected Tx Unit Hang
> > > >>> [  486.876191]   Tx Queue             <0>
> > > >>> [  486.876191]   TDH                  <a5>
> > > >>> [  486.876191]   TDT                  <b5>
> > > >>> [  486.876191]   next_to_use          <b5>
> > > >>> [  486.876191]   next_to_clean        <a5>
> > > >>> [  486.876191] buffer_info[next_to_clean]
> > > >>> [  486.876191]   time_stamp           <10000b439>
> > > >>> [  486.876191]   next_to_watch        <a6>
> > > >>> [  486.876191]   jiffies              <10000b67f>
> > > >>> [  486.876191]   next_to_watch.status <0>
> > > >>
> > > >> This looks like an e1000 driver/hardware problem.  It doesn't
> > > >> immediately look Xen specific.
> > > >>
> > > >> David
> > > > 
> > > > Hi David et al,
> > > > 
> > > > Many thanks for your replies and for your time spent looking into this 
> > > > problem.  
> > > > 
> > > > I probably should have mentioned in my report that when BIOS settings:
> > > > 
> > > >     Intel(R) Virtualization Technology
> > > >     Intel(R) VT for Directed I/O
> > > >     
> > > > are enabled, creating VMs that do not use a pci passthrough device 
> > > > works just fine.  It's only when I try and create a VM that does use a 
> > > > pci passthrough device that the crash happens.  Whether this is 
> > > > relevant or not I'm not competent enough to say.
> > > 
> > > It's not obviously related since it's a device/driver in dom0 that's
> > > broken.  I would suggest asking the e1000 maintainers if the Tx Unit
> > > Hang debug output hints at anything.
> > 
> > Lets CC them.
> > 
> > The thread is also available at:
> > http://lists.xen.org/archives/html/xen-devel/2014-05/msg02259.html
> > 
> > Thank you!
> 
> 
> Hi Konrad et al,

Hey!
> 
> I mailed e1000-devel@lists.sourceforge.net last night:
> 
>     Date: Wed, 21 May 2014 21:18:26 +0100
>     To: e1000-devel <e1000-devel@lists.sourceforge.net>
>     User-Agent: Mutt/1.5.21 (2010-09-15)
>     From: Jo Mills <jo@maniscorse.co.uk>
>     Subject: linux-image-3.13-1-amd64  3.13.10-1, Intel M/B, \
>                     I/O virt. enabled, start vm -> e1000 "Tx Unit Hang"
> 
> with the same information as I raised with xen-devel.  I did think 
> carefully about CC'ing xen-devel, but decided it probably wasn't 
> reasonable to include yet another copy of my bug report here. I was 
> going to update the xen-devel list when I heard back from the e1000 
> maintainers. 
> 

Oh!
> Thanks again for your interest and support,

Sure, just want to make sure we get to the bottom of this.

> 
> Best regards,
> 
> Jo.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
       [not found]               ` <20140523155015.GC5209@phenom.dumpdata.com>
@ 2014-06-01 16:18                 ` Jo Mills
  2014-06-02  9:34                   ` David Vrabel
  0 siblings, 1 reply; 15+ messages in thread
From: Jo Mills @ 2014-06-01 16:18 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: alexander.h.duyck, Ian Campbell, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, David Vrabel, carolyn.wyborny,
	jeffrey.t.kirsher

Hi Konrad et al,

I have had no reply from the e1000-devel list about my "e1000 Tx Hang" 
problem, but I may have stumbled on something relevant.

Today I created a new VM (Wheezy) running on a Wheezy dom0 - my node 
blue in the previous e-mail trail.  I decided to use the e1000e device 
at 0000:01:00.0 via pciback, as this device is currently free (I'm 
building a replacement for an out-of-date DMZ VM).


Just to recap, dom0 has the following devices:

    eth0 via-rhine 0000:04:00.0 assigned for zone LOC xenbr0

    eth1 via-rhine 0000:04:01.0 assigned for zone DMZ (pci-passthrough)

    eth2 e1000 0000:04:02.0 used for DRBD

    eth3 e1000e 0000:01:00.0 planned for Windows client domU

eth3 is the one I have "pinched" to build my new DMZ domU.

When I create this domU (which uses a DRBD device) I get the following 
error (copied from dmesg), and eth0 in the domU is clearly unstable.

[    2.703909] input: PC Speaker as /devices/platform/pcspkr/input/input0
[    2.780858] e1000e: Intel(R) PRO/1000 Network Driver - 2.3.2-k
[    2.780868] e1000e: Copyright(c) 1999 - 2013 Intel Corporation.
[    2.781119] e1000e 0000:00:00.0: enabling device (0000 -> 0002)
[    2.781500] e1000e 0000:00:00.0: Xen PCI mapped GSI16 to IRQ26
[    2.781933] e1000e 0000:00:00.0: setting latency timer to 64
[    2.782684] Error: Driver 'pcspkr' is already registered, aborting...
[    2.784207] e1000e 0000:00:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    2.895529] e1000e 0000:00:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 68:05:ca:21:80:2c
[    2.895542] e1000e 0000:00:00.0: eth0: Intel(R) PRO/1000 Network Connection
[    2.895557] e1000e 0000:00:00.0: eth0: MAC: 3, PHY: 8, PBA No: E46981-008
[    3.153496] Adding 524284k swap on /dev/xvda1.  Priority:-1 extents:1 across:524284k SS
[    3.531185] EXT3-fs (xvda2): using internal journal
[    5.026608] ADDRCONF(NETDEV_UP): eth0: link is not ready
[    8.121021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[    8.121454] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   13.824073] ------------[ cut here ]------------
[   13.824087] WARNING: at /build/linux-X2rDfB/linux-3.2.57/net/sched/sch_generic.c:256 dev_watchdog+0xf2/0x151()
[   13.824099] NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
[   13.824104] Modules linked in: evdev e1000e snd_pcm snd_page_alloc snd_timer snd soundcore pcspkr xen_pcifront coretemp ext3 mbcache jbd xen_blkfront
[   13.824130] Pid: 0, comm: swapper/0 Not tainted 3.2.0-4-amd64 #1 Debian 3.2.57-3
[   13.824138] Call Trace:
[   13.824141]  <IRQ>  [<ffffffff81046cd9>] ? warn_slowpath_common+0x78/0x8c
[   13.824157]  [<ffffffff81046d85>] ? warn_slowpath_fmt+0x45/0x4a
[   13.824163]  [<ffffffff812a7705>] ? netif_tx_lock+0x40/0x75
[   13.824171]  [<ffffffff812a7875>] ? dev_watchdog+0xf2/0x151
[   13.824179]  [<ffffffff810524f8>] ? run_timer_softirq+0x19a/0x261
[   13.824186]  [<ffffffff8109124c>] ? handle_irq_event_percpu+0x15f/0x17d
[   13.824194]  [<ffffffff812a7783>] ? netif_tx_unlock+0x49/0x49
[   13.824203]  [<ffffffff8104c36e>] ? __do_softirq+0xb9/0x177
[   13.824209]  [<ffffffff8121c0bd>] ? __xen_evtchn_do_upcall+0x24a/0x287
[   13.824219]  [<ffffffff81356c6c>] ? call_softirq+0x1c/0x30
[   13.824227]  [<ffffffff8100fa21>] ? do_softirq+0x3c/0x7b
[   13.824233]  [<ffffffff8104c5d6>] ? irq_exit+0x3c/0x99
[   13.824240]  [<ffffffff8121d47d>] ? xen_evtchn_do_upcall+0x27/0x32
[   13.824249]  [<ffffffff81356cbe>] ? xen_do_hypervisor_callback+0x1e/0x30
[   13.824254]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[   13.824264]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[   13.824273]  [<ffffffff8100675a>] ? xen_safe_halt+0xc/0x13
[   13.824279]  [<ffffffff81014614>] ? default_idle+0x47/0x7f
[   13.824286]  [<ffffffff8100d24c>] ? cpu_idle+0xaf/0xf2
[   13.824294]  [<ffffffff816abb36>] ? start_kernel+0x3b8/0x3c3
[   13.824301]  [<ffffffff816ad4df>] ? xen_start_kernel+0x412/0x418
[   13.824308] ---[ end trace c3ec188c56467b6a ]---
[   13.824328] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[   17.613023] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   18.736090] eth0: no IPv6 routers present
[   22.832151] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[   26.741021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   71.824169] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[   75.645020] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[  365.824077] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[  369.629024] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[  379.824070] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[  383.709023] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[  393.824070] e1000e 0000:00:00.0: eth0: Reset adapter unexpectedly
[  397.657026] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None


Below is the vm config file in case there is anything useful in there.

    #
    # Configuration file for the Xen instance vm-server-22.maniscorse, created
    # by xen-tools 4.3.1 on Sun Jun  1 15:31:46 2014.
    #
    
    #
    #  Kernel + memory size
    #
    
    
    #bootloader = '/usr/lib/xen-default/bin/pygrub'
    bootloader = 'pygrub'
    
    vcpus       = '1'
    memory      = '1024'
    
    #
    #  Disk device(s).
    #
    root        = '/dev/xvda2 ro'
    
    #disk        = [
    #                  'phy:/dev/blue/vm-server-22.maniscorse-disk,xvda2,w',
    #                  'phy:/dev/blue/vm-server-22.maniscorse-swap,xvda1,w',
    #              ]
    
    #
    # Add support for drbd
    #
    disk         = [
                    'drbd:vm-22-disk,xvda2,w',
                    'phy:/dev/blue/vm-server-22.maniscorse-swap,xvda1,w',
                ]
    
    
    #
    #  Physical volumes
    #
    
    
    #
    #  Hostname
    #
    name        = 'vm-server-22.maniscorse'
    
    #
    #  Networking
    #
    #vif         = [ 'ip=192.168.2.222 ,mac=00:16:3e:de:02:00' ]
    #
    #
    # Add support for looped through pci NIC
    # (Same device number on both blue and green)
    #
    pci = [ '01:00.0' ]
    
    
    #
    #  Behaviour
    #
    on_poweroff = 'destroy'
    on_reboot   = 'restart'
    on_crash    = 'restart'



Is it possible that there is some horrible interaction between the 
e1000 device and the e1000e device? It just seems quite a coincidence.

Best regards,

Jo.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-06-01 16:18                 ` Jo Mills
@ 2014-06-02  9:34                   ` David Vrabel
  2014-06-02 18:25                     ` Jo Mills
  0 siblings, 1 reply; 15+ messages in thread
From: David Vrabel @ 2014-06-02  9:34 UTC (permalink / raw)
  To: Jo Mills, Konrad Rzeszutek Wilk
  Cc: alexander.h.duyck, Ian Campbell, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, David Vrabel, carolyn.wyborny,
	jeffrey.t.kirsher

On 01/06/14 17:18, Jo Mills wrote:
> Hi Konrad et al,
> 
> I have had no reply from the e1000-devel list about my "e1000 Tx Hang" 
> problem, but I may have stumbled on something relevant.
> 
> Today I created a new VM (wheezy) running on a Wheezy dom0 - my node 
> blue in the previous e-mail trail.  I decided to use the e1000e device 
> at 0000:01:00.0 via pciback as this device is currently free (I'm 
> building a replacement for an out of date DMZ VM).

Can you check you have 4704fe4f03a5ab27e3c36184af85d5000e0f8a48
(xen/events: mask events when changing their VCPU binding) in your domU
kernel?

David


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-06-02  9:34                   ` David Vrabel
@ 2014-06-02 18:25                     ` Jo Mills
  2014-06-08 15:15                       ` Jo Mills
  0 siblings, 1 reply; 15+ messages in thread
From: Jo Mills @ 2014-06-02 18:25 UTC (permalink / raw)
  To: David Vrabel
  Cc: alexander.h.duyck, Ian Campbell, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, jeffrey.t.kirsher, carolyn.wyborny

On Mon, Jun 02, 2014 at 10:34:51AM +0100, David Vrabel wrote:
> On 01/06/14 17:18, Jo Mills wrote:
> > Hi Konrad et al,
> >
> > I have had no reply from the e1000-devel list about my "e1000 Tx Hang"
> > problem, but I may have stumbled on something relevant.
> >
> > Today I created a new VM (wheezy) running on a Wheezy dom0 - my node
> > blue in the previous e-mail trail.  I decided to use the e1000e device
> > at 0000:01:00.0 via pciback as this device is currently free (I'm
> > building a replacement for an out of date DMZ VM).
>
> Can you check you have 4704fe4f03a5ab27e3c36184af85d5000e0f8a48
> (xen/events: mask events when changing their VCPU binding) in your domU
> kernel?
>
> David
>


Hi David,

Many thanks for the quick reply.  To the best of my ability I have
checked and I believe the above is in the domU kernel.


root@vm-server-22:~# cat /proc/version
  Linux version 3.2.0-4-amd64 (debian-kernel@lists.debian.org) \
    (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.57-3+deb7u1


root@white:~# dpkg -l linux-source-3.2
  ii  linux-source-3.2                   3.2.57-3+deb7u1        all


and then from the above sources, if I look in
linux-source-3.2/drivers/xen/events.c I can see:
    .
    .
    .

/* Rebind an evtchn so that it gets delivered to a specific cpu */
static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	struct evtchn_bind_vcpu bind_vcpu;
	int evtchn = evtchn_from_irq(irq);
	int masked;

	if (!VALID_EVTCHN(evtchn))
		return -1;

	/*
	 * Events delivered via platform PCI interrupts are always
	 * routed to vcpu 0 and hence cannot be rebound.
	 */
	if (xen_hvm_domain() && !xen_have_vector_callback)
		return -1;

	/* Send future instances of this interrupt to other vcpu. */
	bind_vcpu.port = evtchn;
	bind_vcpu.vcpu = tcpu;

	/*
	 * Mask the event while changing the VCPU binding to prevent
	 * it being delivered on an unexpected VCPU.
	 */
	masked = sync_test_and_set_bit(evtchn, s->evtchn_mask);

	/*
	 * If this fails, it usually just indicates that we're dealing with a
	 * virq or IPI channel, which don't actually need to be rebound. Ignore
	 * it, but don't do the xenlinux-level rebind in that case.
	 */
	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
		bind_evtchn_to_cpu(evtchn, tcpu);

	if (!masked)
		unmask_evtchn(evtchn);

	return 0;
}
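
(A quicker cross-check, assuming a clone of the upstream Linux git tree
is to hand - a backport into a stable/distro tree carries a different
commit id, so grepping the source as above remains the definitive test:

    git tag --contains 4704fe4f03a5ab27e3c36184af85d5000e0f8a48
    grep -n "Mask the event while changing the VCPU binding" \
        drivers/xen/events.c

the first command lists the upstream releases that contain the fix.)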


Best regards,

Jo.


* Re: xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic
  2014-06-02 18:25                     ` Jo Mills
@ 2014-06-08 15:15                       ` Jo Mills
  0 siblings, 0 replies; 15+ messages in thread
From: Jo Mills @ 2014-06-08 15:15 UTC (permalink / raw)
  To: David Vrabel
  Cc: alexander.h.duyck, Ian Campbell, donald.c.skidmore,
	mitch.a.williams, bruce.w.allan, jesse.brandeburg, xen-devel,
	gregory.v.rose, john.ronciak, jeffrey.t.kirsher, carolyn.wyborny

Hi David et al,

Just to try something, I swapped the e1000e NIC for an r8169 NIC and 
tried again.  The Realtek device did not bounce up and down like the 
Intel one did, and although ifconfig said the device was up and the 
link LED on the card showed a connection, the device seemed to be 
utterly dead (no reply to/from ping).  Then I got the same crash as 
with the Intel card: 

root@vm-server-22:~# [ 1057.824018] ------------[ cut here ]------------
[ 1057.824033] WARNING: at /build/linux-5U_ZPM/linux-3.2.57/net/sched/sch_generic.c:256 dev_watchdog+0xf2/0x151()
[ 1057.824041] NETDEV WATCHDOG: eth0 (r8169): transmit queue 0 timed out
[ 1057.824049] Modules linked in: r8169 mii evdev snd_pcm snd_page_alloc snd_timer snd soundcore pcspkr coretemp xen_pcifront ext3 mbcache jbd dm_mod xen_blkfront
[ 1057.824089] Pid: 0, comm: swapper/0 Not tainted 3.2.0-4-amd64 #1 Debian 3.2.57-3+deb7u2
[ 1057.824098] Call Trace:
[ 1057.824103]  <IRQ>  [<ffffffff81046cd9>] ? warn_slowpath_common+0x78/0x8c
[ 1057.824118]  [<ffffffff81046d85>] ? warn_slowpath_fmt+0x45/0x4a
[ 1057.824127]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[ 1057.824135]  [<ffffffff812a7839>] ? netif_tx_lock+0x40/0x75
[ 1057.824143]  [<ffffffff812a79a9>] ? dev_watchdog+0xf2/0x151
[ 1057.824150]  [<ffffffff810524f8>] ? run_timer_softirq+0x19a/0x261
[ 1057.824158]  [<ffffffff812a78b7>] ? netif_tx_unlock+0x49/0x49
[ 1057.824167]  [<ffffffff8109138a>] ? handle_irq_event+0x40/0x52
[ 1057.824176]  [<ffffffff8104c36e>] ? __do_softirq+0xb9/0x177
[ 1057.824185]  [<ffffffff8121c1bd>] ? __xen_evtchn_do_upcall+0x24a/0x287
[ 1057.824194]  [<ffffffff81356e6c>] ? call_softirq+0x1c/0x30
[ 1057.824203]  [<ffffffff8100fa21>] ? do_softirq+0x3c/0x7b
[ 1057.824208]  [<ffffffff8104c5d6>] ? irq_exit+0x3c/0x99
[ 1057.824217]  [<ffffffff8121d57d>] ? xen_evtchn_do_upcall+0x27/0x32
[ 1057.824225]  [<ffffffff81356ebe>] ? xen_do_hypervisor_callback+0x1e/0x30
[ 1057.824233]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[ 1057.824243]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[ 1057.824252]  [<ffffffff8100675a>] ? xen_safe_halt+0xc/0x13
[ 1057.824258]  [<ffffffff81014614>] ? default_idle+0x47/0x7f
[ 1057.824265]  [<ffffffff8100d24c>] ? cpu_idle+0xaf/0xf2
[ 1057.824274]  [<ffffffff816abb36>] ? start_kernel+0x3b8/0x3c3
[ 1057.824282]  [<ffffffff816ad4df>] ? xen_start_kernel+0x412/0x418
[ 1057.824289] ---[ end trace eee1d2411f3779ca ]---
[ 1057.842587] r8169 0000:00:00.0: eth0: link up


Oh hum,

Any suggestions etc. all very gratefully received,

Best regards,

Jo.

