* Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
@ 2016-08-25 20:21 linux
  2016-08-25 20:34 ` Doug Goldstein
  0 siblings, 1 reply; 24+ messages in thread
From: linux @ 2016-08-25 20:21 UTC (permalink / raw)
  To: xen-devel

Today I tried to switch some of my HVM guests (qemu-xen) from booting a 
kernel *inside* the guest to a dom0-supplied kernel, which is described 
as "Direct Kernel Boot" in 
https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html:

     Direct Kernel Boot

     Direct kernel boot allows booting directly from a kernel and initrd
     stored in the host physical machine OS, allowing command line
     arguments to be passed directly. PV guest direct kernel boot is
     supported. HVM guest direct kernel boot is supported with
     limitation (it's supported when using qemu-xen and default BIOS
     'seabios'; not supported in case of stubdom-dm and old rombios.)

     kernel="PATHNAME"    Load the specified file as the kernel image.
     ramdisk="PATHNAME"   Load the specified file as the ramdisk.
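
Concretely, for an HVM guest this just means adding something like the 
following to the existing xl.cfg (the paths below are illustrative, not 
my actual ones):

     kernel  = "/boot/guests/vmlinuz"
     ramdisk = "/boot/guests/initrd.img"
     extra   = "root=/dev/xvda1 console=ttyS0"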

But qemu fails to start, output appended below.

I tested with:
- current Xen-unstable, which fails.
- xen-stable-4.7.0 release, which fails.
- xen-stable-4.6.0 release, which works fine.

So it's a regression somewhere between 4.6.0 and 4.7.0; hopefully 
someone has a hunch before I try a full bisect between those two 
releases.
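
(If it does come to a bisect, it would presumably be the usual routine 
over the xen.git release tags, roughly:

     git bisect start
     git bisect bad  RELEASE-4.7.0
     git bisect good RELEASE-4.6.0
     # rebuild and reinstall the toolstack at each step,
     # then retry "xl create" of the guest

assuming the usual RELEASE-x.y.z tags.)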

--
Sander

From the qemu log:

qemu: hardware error: xen: failed to populate ram at 40050000
CPU #0:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 
XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 
XMM07=00000000000000000000000000000000
CPU #2:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 
XMM07=00000000000000000000000000000000
CPU #3:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 
XMM07=00000000000000000000000000000000




* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-25 20:21 Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore linux
@ 2016-08-25 20:34 ` Doug Goldstein
  2016-08-25 21:18   ` linux
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Goldstein @ 2016-08-25 20:34 UTC (permalink / raw)
  To: linux, xen-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 1268 bytes --]

On 8/25/16 4:21 PM, linux@eikelenboom.it wrote:
> Today i tried to switch some of my HVM guests (qemu-xen) from booting of
> a kernel *inside* the guest, to a dom0 supplied kernel, which is
> described as "Direct Kernel Boot" here:
> https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html :
> 
>     Direct Kernel Boot
> 
>     Direct kernel boot allows booting directly from a kernel and initrd
> stored in the host physical
>     machine OS, allowing command line arguments to be passed directly.
> PV guest direct kernel boot
>     is supported. HVM guest direct kernel boot is supported with
> limitation (it's supported when
>     using qemu-xen and default BIOS 'seabios'; not supported in case of
> stubdom-dm and old rombios.)
> 
>     kernel="PATHNAME"    Load the specified file as the kernel image.
>     ramdisk="PATHNAME"   Load the specified file as the ramdisk.
> 
> But qemu fails to start, output appended below.
> 
> I tested with:
> - current Xen-unstable, which fails.
> - xen-stable-4.7.0 release, which fails.
> - xen-stable-4.6.0 release, works fine.

Can you include the logs from xl dmesg around that time frame as well?
Also, just wondering: how much RAM is your domain defined with?
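
Something like this would be enough (the guest config path is just a 
placeholder):

     xl dmesg > xl-dmesg.log
     grep -iE 'memory|maxmem' /etc/xen/<guest>.cfg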

-- 
Doug Goldstein


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 959 bytes --]



* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-25 20:34 ` Doug Goldstein
@ 2016-08-25 21:18   ` linux
  2016-08-26 10:19     ` Håkon Alstadheim
  2016-09-05  9:20     ` linux
  0 siblings, 2 replies; 24+ messages in thread
From: linux @ 2016-08-25 21:18 UTC (permalink / raw)
  To: Doug Goldstein; +Cc: xen-devel

On 2016-08-25 22:34, Doug Goldstein wrote:
> On 8/25/16 4:21 PM, linux@eikelenboom.it wrote:
>> Today i tried to switch some of my HVM guests (qemu-xen) from booting 
>> of
>> a kernel *inside* the guest, to a dom0 supplied kernel, which is
>> described as "Direct Kernel Boot" here:
>> https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html :
>> 
>>     Direct Kernel Boot
>> 
>>     Direct kernel boot allows booting directly from a kernel and 
>> initrd
>> stored in the host physical
>>     machine OS, allowing command line arguments to be passed directly.
>> PV guest direct kernel boot
>>     is supported. HVM guest direct kernel boot is supported with
>> limitation (it's supported when
>>     using qemu-xen and default BIOS 'seabios'; not supported in case 
>> of
>> stubdom-dm and old rombios.)
>> 
>>     kernel="PATHNAME"    Load the specified file as the kernel image.
>>     ramdisk="PATHNAME"   Load the specified file as the ramdisk.
>> 
>> But qemu fails to start, output appended below.
>> 
>> I tested with:
>> - current Xen-unstable, which fails.
>> - xen-stable-4.7.0 release, which fails.
>> - xen-stable-4.6.0 release, works fine.
> 
> Can you include the logs from xl dmesg around that time frame as well?

Ah, I thought there wasn't any, but I didn't check thoroughly, or it 
wasn't there, since the release builds are non-debug by default.
However, back on xen-unstable:
(XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU
(XEN) [2016-08-25 21:09:15.172] HVM19 save: PIC
(XEN) [2016-08-25 21:09:15.172] HVM19 save: IOAPIC
(XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC
(XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC_REGS
(XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_IRQ
(XEN) [2016-08-25 21:09:15.172] HVM19 save: ISA_IRQ
(XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_LINK
(XEN) [2016-08-25 21:09:15.172] HVM19 save: PIT
(XEN) [2016-08-25 21:09:15.172] HVM19 save: RTC
(XEN) [2016-08-25 21:09:15.172] HVM19 save: HPET
(XEN) [2016-08-25 21:09:15.172] HVM19 save: PMTIMER
(XEN) [2016-08-25 21:09:15.172] HVM19 save: MTRR
(XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_DOMAIN
(XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU_XSAVE
(XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_VCPU
(XEN) [2016-08-25 21:09:15.172] HVM19 save: VMCE_VCPU
(XEN) [2016-08-25 21:09:15.172] HVM19 save: TSC_ADJUST
(XEN) [2016-08-25 21:09:15.172] HVM19 restore: CPU 0
(XEN) [2016-08-25 21:09:16.126] d0v1 Over-allocation for domain 19: 262401 > 262400
(XEN) [2016-08-25 21:09:16.126] memory.c:213:d0v1 Could not allocate order=0 extent: id=19 memflags=0 (192 of 512)

Hmm, some off-by-one issue?
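
The numbers at least line up with that: 262400 pages is exactly the 
guest's 1024 MiB plus what I assume is libxl's usual 1 MiB of slack, 
and the populate call wants one page more than that:

     $ echo $(( 1024 * 1024 / 4 + 1024 / 4 ))    # 1024 MiB + 1 MiB, in 4 KiB pages
     262400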


> Just wondering how much RAM you're domain is defined with as well?

1024 MB. There is more than enough unallocated memory for Xen to start 
the guest (dom0 is fixed with dom0_mem=1536M,max:1536M and ballooning 
is off).
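
(This can be checked with e.g.:

     xl info | grep -E 'total_memory|free_memory'

which reports the host totals in MB.)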

--
Sander







* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-25 21:18   ` linux
@ 2016-08-26 10:19     ` Håkon Alstadheim
  2016-08-30 12:35       ` Wei Liu
  2016-09-05  9:20     ` linux
  1 sibling, 1 reply; 24+ messages in thread
From: Håkon Alstadheim @ 2016-08-26 10:19 UTC (permalink / raw)
  To: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 16364 bytes --]

On 25 Aug 2016 23:18, linux@eikelenboom.it wrote:
> On 2016-08-25 22:34, Doug Goldstein wrote:
>> On 8/25/16 4:21 PM, linux@eikelenboom.it wrote:
>>> Today i tried to switch some of my HVM guests (qemu-xen) from
>>> booting of
>>> a kernel *inside* the guest, to a dom0 supplied kernel, which is
>>> described as "Direct Kernel Boot" here:
>>> https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html :
>>>
>>>     Direct Kernel Boot
>>>
>>>     Direct kernel boot allows booting directly from a kernel and initrd
>>> stored in the host physical
>>>     machine OS, allowing command line arguments to be passed directly.
>>> PV guest direct kernel boot
>>>     is supported. HVM guest direct kernel boot is supported with
>>> limitation (it's supported when
>>>     using qemu-xen and default BIOS 'seabios'; not supported in case of
>>> stubdom-dm and old rombios.)
>>>
>>>     kernel="PATHNAME"    Load the specified file as the kernel image.
>>>     ramdisk="PATHNAME"   Load the specified file as the ramdisk.
>>>
>>> But qemu fails to start, output appended below.
>>>
>>> I tested with:
>>> - current Xen-unstable, which fails.
>>> - xen-stable-4.7.0 release, which fails.
>>> - xen-stable-4.6.0 release, works fine.
>>
>> Can you include the logs from xl dmesg around that time frame as well?
>
> Ah i thought there wasn't any, but didn't check thoroughly or wasn't
> there
> since the release builds are non-debug by default.
>
> However, back on xen-unstable:
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: IOAPIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC_REGS
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_IRQ
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: ISA_IRQ
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_LINK
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PIT
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: RTC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: HPET
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PMTIMER
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: MTRR
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_DOMAIN
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU_XSAVE
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_VCPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VMCE_VCPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: TSC_ADJUST
> (XEN) [2016-08-25 21:09:15.172] HVM19 restore: CPU 0
> (XEN) [2016-08-25 21:09:16.126] d0v1 Over-allocation for domain 19:
> 262401 > 262400
> (XEN) [2016-08-25 21:09:16.126] memory.c:213:d0v1 Could not allocate
> order=0 extent: id=19 memflags=0 (192 of 512)
>
> Hmm some off by one issue ?
>
>
>> Just wondering how much RAM you're domain is defined with as well?
>
> 1024 Mb, there is more than enough unallocated memory for xen to start
> the guest (and dom0 is fixed with dom0_mem=1536M,max:1536M and
> ballooning is off)
>
> -- 
> Sander
>
>
>

I've got the same issue; I reported it on xen-users some time ago. I
never caught on that internal/external kernel would trigger it. I'll
just paste my entire message from xen-users below:
------

I have been trying for some time now to upgrade from Xen 4.6.* to 4.7,
trying several different dom0 kernel versions and jiggling the xl.cfg
files, all to no avail.

I am unable to launch most of my guests under 4.7, though they run fine
under 4.6 (except for some USB/PCI-passthrough-related issues). As seen
from the device-model log below, qemu claims it is unable to allocate
RAM: "qemu: hardware error: xen: failed to populate ram at 280050000",
but I have plenty of RAM available, and this same VM (and many more)
launch fine under 4.6.*.

I admit I am a rank amateur at this, so my config is probably pretty
weird, possibly leading to a set-up that nobody knowledgeable would run.
If somebody can give me a hint on how to work around this issue I'll
happily test patches and provide logs.

Example VM which does not start under 4.7:

------ xl.cfg for media.hvm (I pass PCI passthrough for the USB card on
the command line; works OK) ------
name = "media.hvm"
builder = "hvm"
xen_platform_pci = '1'
pvh=1
memory = 7168
mmio_hole=3072
vcpus = 6
cap=600
cpus_soft="node:0"
cpu_weight=6144
device_model_version="qemu-xen"
serial = 'pty'
disk = [ 'vdev=xvda, format=raw, target=/dev/system/media-backend'
        ,'vdev=xvdb, format=raw, target=/dev/system/media-backend-swap'
    ,'vdev=xvdd, format=raw, target=/dev/system/apub'
        ,'vdev=xvde, format=raw, target=/dev/system/apub1'
        ,'vdev=xvdf, format=raw, target=/dev/system/apub2'
        ,'vdev=xvdg, format=raw, target=/dev/system/apub3'
        ,'vdev=xvdh, format=raw, target=/dev/system/apub4'
        ,'vdev=xvdi, format=raw, target=/dev/system/apub5'
        ,'vdev=xvdj, format=raw, target=/dev/system/apub6'
        ,'vdev=xvdk, format=raw, target=/dev/system/apub7' ]
kernel = "/etc/xen/media-boot/vmlinuz-4.1.12-gentoo"
extra = "root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128"
vif = ['mac=02:16:3e:00:00:07,bridge=br0']
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
boot = 'd'
acpi = '1'
sdl = '0'
vnc = '1'
--------

This results in the following:

-----VM console log: ----

Parsing config from /etc/xen/media.hvm
libxl: error: libxl_dm.c:2187:device_model_spawn_outcome: domain 3
device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1422:domcreate_devmodel_started: device
model did not start: -3
libxl: error: libxl_dm.c:2301:kill_device_model: Device Model already exited
libxl: error: libxl.c:1583:libxl__destroy_domid: non-existant domain 3
libxl: error: libxl.c:1542:domain_destroy_callback: unable to destroy
guest with domid 3
libxl: error: libxl.c:1471:domain_destroy_cb: destruction of domain 3 failed

------ dom0 console: ---

(XEN) [2016-08-10 10:14:09] HVM3 save: CPU
(XEN) [2016-08-10 10:14:09] HVM3 save: PIC
(XEN) [2016-08-10 10:14:09] HVM3 save: IOAPIC
(XEN) [2016-08-10 10:14:09] HVM3 save: LAPIC
(XEN) [2016-08-10 10:14:09] HVM3 save: LAPIC_REGS
(XEN) [2016-08-10 10:14:09] HVM3 save: PCI_IRQ
(XEN) [2016-08-10 10:14:09] HVM3 save: ISA_IRQ
(XEN) [2016-08-10 10:14:09] HVM3 save: PCI_LINK
(XEN) [2016-08-10 10:14:09] HVM3 save: PIT
(XEN) [2016-08-10 10:14:09] HVM3 save: RTC
(XEN) [2016-08-10 10:14:09] HVM3 save: HPET
(XEN) [2016-08-10 10:14:09] HVM3 save: PMTIMER
(XEN) [2016-08-10 10:14:09] HVM3 save: MTRR
(XEN) [2016-08-10 10:14:09] HVM3 save: VIRIDIAN_DOMAIN
(XEN) [2016-08-10 10:14:09] HVM3 save: CPU_XSAVE
(XEN) [2016-08-10 10:14:09] HVM3 save: VIRIDIAN_VCPU
(XEN) [2016-08-10 10:14:09] HVM3 save: VMCE_VCPU
(XEN) [2016-08-10 10:14:09] HVM3 save: TSC_ADJUST
(XEN) [2016-08-10 10:14:09] HVM3 restore: CPU 0
(XEN) [2016-08-10 10:14:11] d0v0 Over-allocation for domain 3: 1835265 > 1835264
(XEN) [2016-08-10 10:14:11] memory.c:209:d0v0 Could not allocate order=0 extent: id=3 memflags=0 (192 of 512)
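
For what it's worth, 1835264 pages is exactly the 7168 MiB this guest
is configured with plus 1 MiB, and the failing request is for one page
beyond that:

     $ echo $(( 7168 * 1024 / 4 + 1024 / 4 ))    # 7168 MiB + 1 MiB, in 4 KiB pages
     1835264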

------------- xl info output: ---------

host                   : gentoo
release                : 4.1.29-gentoo
version                : #1 SMP Wed Aug 10 03:47:43 CEST 2016
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 2394
hw_caps                :
b7ebfbff:77fef3ff:2c100800:00000021:00000001:000037ab:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 65376
free_memory            : 47044
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 7
xen_extra              : .0
xen_version            : 4.7.0
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : ssd-xen-4.7-marker console_timestamps=date
loglvl=all guest_loglvl=all sync_console iommu=1,verbose,debug
iommu_inclusive_mapping=1 com1=115200,8n1 console=com1 dom0_max_vcpus=4
dom0_vcpus_pin=1 dom0_mem=7G,max:7G cpufreq=xen,performance,verbose
sched_smt_power_savings=1 apic_verbosity=debug e820-verbose=1
core_parking=power cpuidle=0
cc_compiler            : x86_64-pc-linux-gnu-gcc (Gentoo 5.4.0 p1.0,
pie-0.6.5) 5.4.0
cc_compile_by          :
cc_compile_domain      : alstadheim.priv.no
cc_compile_date        : Tue Aug  9 17:12:07 CEST 2016
build_id               : 124ae07d4d637e3a8dc4150d03008027ce5c4d54
xend_config_format     : 4

-------- device model log: -------

char device redirected to /dev/pts/8 (label serial0)
qemu: hardware error: xen: failed to populate ram at 280050000
CPU #0:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #2:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #3:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #4:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #5:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000663
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
-------

---

Regards, Håkon A.






* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-26 10:19     ` Håkon Alstadheim
@ 2016-08-30 12:35       ` Wei Liu
  2016-08-30 22:13         ` Håkon Alstadheim
  0 siblings, 1 reply; 24+ messages in thread
From: Wei Liu @ 2016-08-30 12:35 UTC (permalink / raw)
  To: Håkon Alstadheim; +Cc: Wei Liu, xen-devel

Could you please use xl -vvv create to create the guest and collect the
output?
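
For example (config path taken from the earlier mail):

     xl -vvv create /etc/xen/media.hvm 2>&1 | tee xl-create.log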

Wei.



* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-30 12:35       ` Wei Liu
@ 2016-08-30 22:13         ` Håkon Alstadheim
  0 siblings, 0 replies; 24+ messages in thread
From: Håkon Alstadheim @ 2016-08-30 22:13 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 749 bytes --]

On 30 Aug 2016 14:35, Wei Liu wrote:
> Could you please use xl -vvv create to create the guest and collect the
> output?
>
> Wei.
>
See xl-console-under-xen4.7-v1.log and xl-console-under-xen4.7-v2.log,
which are two "xl create" runs right after each other (my system may
have been trying to start another VM in the meantime). The hypervisor
console log is also attached. At 2016-08-30 21:14:21 my system attempts
an automatic launch of "media.hvm" and outputs xl info to /dev/hvc0
when media.hvm does not boot. Shortly thereafter I attempt a manual
start of media.hvm. Command input and dom0 console are not attached
here.

I would be happy to perform further tests if I am able, and thank you
for looking into this :-)

Regards, Håkon A.



[-- Attachment #2: xl-console-under-xen4.7-v1.log --]
[-- Type: text/x-log, Size: 47282 bytes --]

Parsing config from /etc/xen/media.hvm
libxl: debug: libxl_create.c:1710:do_domain_create: ao 0x63b600: create: how=(nil) callback=(nil) poller=0x63b080
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdb, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdd, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvde, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdf, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdg, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdh, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdi, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdj, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdk, using backend phy
libxl: debug: libxl_create.c:970:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63daa8: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128", features="(null)"
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 757 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.7, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ... 
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xc6dc4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1c6dc4
domainbuilder: detail: xc_dom_mem_init: mem 9208 MB, pages 0x23f800 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x23f800 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc            : 24560 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003fb
xc: detail:   1GB PAGES: 0x0000000000000007
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0xc7 at 0x7f93e69c8000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x1c7000  (pfn 0x100 + 0xc7 pages)
xc: detail: elf_load_binary: phdr 0 at 0x7f93e1d53000 -> 0x7f93e1e10238
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x1c7000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 24567 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 757 kB
domainbuilder: detail:       domU mmap          : 796 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff800
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff801
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: register slotnum=3
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: register slotnum=2
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: register slotnum=1
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: register slotnum=0
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: register slotnum=19
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: register slotnum=18
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: register slotnum=17
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: register slotnum=16
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: register slotnum=15
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: register slotnum=14
libxl: debug: libxl_create.c:1736:do_domain_create: ao 0x63b600: inprogress: poller=0x63b080, flags=i
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: event epath=/local/domain/0/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63eb20: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51712/state token=3/0: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: event epath=/local/domain/0/backend/vbd/3/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51728/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fa70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51728/state token=2/1: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: event epath=/local/domain/0/backend/vbd/3/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51760/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641db0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51760/state token=1/2: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: event epath=/local/domain/0/backend/vbd/3/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51776/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643aa0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51776/state token=0/3: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: event epath=/local/domain/0/backend/vbd/3/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51792/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6464b0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51792/state token=19/4: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: event epath=/local/domain/0/backend/vbd/3/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51808/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648230: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51808/state token=18/5: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: event epath=/local/domain/0/backend/vbd/3/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51824/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x649f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51824/state token=17/6: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: event epath=/local/domain/0/backend/vbd/3/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51840/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644c00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51840/state token=16/7: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: event epath=/local/domain/0/backend/vbd/3/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51856/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f200: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51856/state token=15/8: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: event epath=/local/domain/0/backend/vbd/3/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51872/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x650f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51872/state token=14/9: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend-swap
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub1
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub2
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub3
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub4
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub5
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub6
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub7
libxl: debug: libxl_dm.c:1498:libxl__build_device_model_args_new: Could not find user xen-qemuuser-shared, starting QEMU as root
libxl: debug: libxl_dm.c:2092:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /usr/libexec/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   3
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/run/xen/qmp-libxl-3,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-shutdown
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxenstat-cmd,path=/run/xen/qmp-libxenstat-3,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxenstat-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-user-config
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   media.hvm
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   127.0.0.1:0,to=99
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -display
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   none
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -kernel
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /etc/xen/media-boot/vmlinuz-4.1.15-gentoo-r1
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -append
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   cirrus-vga,vgamem_mb=8
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   order=d
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   6,maxcpus=6
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=02:16:3e:00:00:07
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif3.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   xenfv,max-ram-below-4g=1073741824
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   9208
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend-swap,if=ide,index=1,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/apub,if=ide,index=3,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with additional environment:
libxl: debug: libxl_dm.c:2098:libxl__spawn_local_dm:   XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: register slotnum=14
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: event epath=/local/domain/0/device-model/3/state
libxl: debug: libxl_exec.c:398:spawn_watch_event: domain 3 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: deregister slotnum=14
libxl: error: libxl_dm.c:2187:device_model_spawn_outcome: domain 3 device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1422:domcreate_devmodel_started: device model did not start: -3
libxl: error: libxl_dm.c:2301:kill_device_model: Device Model already exited
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: register slotnum=14
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: register slotnum=15
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: register slotnum=16
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: register slotnum=17
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: register slotnum=18
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: register slotnum=19
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: register slotnum=0
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: register slotnum=1
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: register slotnum=2
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: event epath=/local/domain/0/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51712/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ab80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51712/state token=14/b: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: event epath=/local/domain/0/backend/vbd/3/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51728/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6587d0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51728/state token=15/c: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: event epath=/local/domain/0/backend/vbd/3/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51760/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658ee0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51760/state token=16/d: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: event epath=/local/domain/0/backend/vbd/3/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51776/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ae80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51776/state token=17/e: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: event epath=/local/domain/0/backend/vbd/3/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51792/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b480: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51792/state token=18/f: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: event epath=/local/domain/0/backend/vbd/3/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51808/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ba00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51808/state token=19/10: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: event epath=/local/domain/0/backend/vbd/3/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51824/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bf70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51824/state token=0/11: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: event epath=/local/domain/0/backend/vbd/3/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51840/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c4e0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51840/state token=1/12: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: event epath=/local/domain/0/backend/vbd/3/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51856/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ca40: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51856/state token=2/13: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: event epath=/local/domain/0/backend/vbd/3/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51872/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cf90: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51872/state token=3/14: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:220:libxl__get_hotplug_script_info: backend_kind 6, no need to execute scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d9c0: deregister unregistered
libxl: debug: libxl.c:1720:devices_destroy_cb: forked pid 8679 for destroy of domain 3
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b600: complete, rc=-3
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b600: destroy
libxl: debug: libxl.c:1453:libxl_domain_destroy: ao 0x63b7a0: create: how=(nil) callback=(nil) poller=0x63b080
libxl: error: libxl.c:1583:libxl__destroy_domid: non-existant domain 3
libxl: error: libxl.c:1542:domain_destroy_callback: unable to destroy guest with domid 3
libxl: error: libxl.c:1471:domain_destroy_cb: destruction of domain 3 failed
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b7a0: complete, rc=-21
libxl: debug: libxl.c:1462:libxl_domain_destroy: ao 0x63b7a0: inprogress: poller=0x63b080, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b7a0: destroy
xencall:buffer: debug: total allocations:752 total releases:752
xencall:buffer: debug: current allocations:0 maximum allocations:2
xencall:buffer: debug: cache current size:2
xencall:buffer: debug: cache hits:725 misses:2 toobig:25
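
For readability, here is the device-model invocation from the log above collapsed into a single command line. This is only the argument list libxl logged before the "spawn failed (rc=-3)" error, joined with shell line continuations and with the -append string quoted by hand; it is not meant as a standalone reproduction recipe, since libxl sets up the domain and its xenstore state before spawning qemu:

    /usr/libexec/xen/bin/qemu-system-i386 -xen-domid 3 \
        -chardev socket,id=libxl-cmd,path=/run/xen/qmp-libxl-3,server,nowait \
        -no-shutdown -mon chardev=libxl-cmd,mode=control \
        -chardev socket,id=libxenstat-cmd,path=/run/xen/qmp-libxenstat-3,server,nowait \
        -mon chardev=libxenstat-cmd,mode=control \
        -nodefaults -no-user-config -name media.hvm \
        -vnc 127.0.0.1:0,to=99 -display none \
        -kernel /etc/xen/media-boot/vmlinuz-4.1.15-gentoo-r1 \
        -append "root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128" \
        -serial pty -device cirrus-vga,vgamem_mb=8 -boot order=d \
        -smp 6,maxcpus=6 \
        -device rtl8139,id=nic0,netdev=net0,mac=02:16:3e:00:00:07 \
        -netdev type=tap,id=net0,ifname=vif3.0-emu,script=no,downscript=no \
        -machine xenfv,max-ram-below-4g=1073741824 -m 9208 \
        -drive file=/dev/system/media-backend,if=ide,index=0,media=disk,format=raw,cache=writeback \
        -drive file=/dev/system/media-backend-swap,if=ide,index=1,media=disk,format=raw,cache=writeback \
        -drive file=/dev/system/apub,if=ide,index=3,media=disk,format=raw,cache=writeback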

[-- Attachment #3: xl-console-under-xen4.7-v2.log --]
[-- Type: text/x-log, Size: 94565 bytes --]

Parsing config from /etc/xen/media.hvm
libxl: debug: libxl_create.c:1710:do_domain_create: ao 0x63b600: create: how=(nil) callback=(nil) poller=0x63b080
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdb, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdd, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvde, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdf, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdg, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdh, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdi, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdj, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdk, using backend phy
libxl: debug: libxl_create.c:970:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63daa8: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128", features="(null)"
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 757 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.7, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ... 
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xc6dc4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1c6dc4
domainbuilder: detail: xc_dom_mem_init: mem 9208 MB, pages 0x23f800 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x23f800 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc            : 24560 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003fb
xc: detail:   1GB PAGES: 0x0000000000000007
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0xc7 at 0x7f93e69c8000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x1c7000  (pfn 0x100 + 0xc7 pages)
xc: detail: elf_load_binary: phdr 0 at 0x7f93e1d53000 -> 0x7f93e1e10238
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x1c7000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 24567 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 757 kB
domainbuilder: detail:       domU mmap          : 796 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff800
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff801
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: register slotnum=3
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: register slotnum=2
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: register slotnum=1
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: register slotnum=0
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: register slotnum=19
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: register slotnum=18
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: register slotnum=17
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: register slotnum=16
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: register slotnum=15
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: register slotnum=14
libxl: debug: libxl_create.c:1736:do_domain_create: ao 0x63b600: inprogress: poller=0x63b080, flags=i
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: event epath=/local/domain/0/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/3/51712/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63eb20: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51712/state token=3/0: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: event epath=/local/domain/0/backend/vbd/3/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51728/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/3/51728/state token=2/1: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fa70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51728/state token=2/1: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: event epath=/local/domain/0/backend/vbd/3/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51760/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/3/51760/state token=1/2: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641db0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51760/state token=1/2: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: event epath=/local/domain/0/backend/vbd/3/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51776/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/3/51776/state token=0/3: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643aa0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51776/state token=0/3: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: event epath=/local/domain/0/backend/vbd/3/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51792/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/3/51792/state token=19/4: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6464b0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51792/state token=19/4: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: event epath=/local/domain/0/backend/vbd/3/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51808/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x648230 wpath=/local/domain/0/backend/vbd/3/51808/state token=18/5: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648230: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51808/state token=18/5: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: event epath=/local/domain/0/backend/vbd/3/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51824/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/3/51824/state token=17/6: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x649f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51824/state token=17/6: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: event epath=/local/domain/0/backend/vbd/3/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51840/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/3/51840/state token=16/7: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644c00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51840/state token=16/7: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: event epath=/local/domain/0/backend/vbd/3/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51856/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/3/51856/state token=15/8: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f200: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51856/state token=15/8: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: event epath=/local/domain/0/backend/vbd/3/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51872/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/3/51872/state token=14/9: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x650f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51872/state token=14/9: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend-swap
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub1
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub2
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub3
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub4
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub5
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub6
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub7
libxl: debug: libxl_dm.c:1498:libxl__build_device_model_args_new: Could not find user xen-qemuuser-shared, starting QEMU as root
libxl: debug: libxl_dm.c:2092:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /usr/libexec/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   3
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/run/xen/qmp-libxl-3,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-shutdown
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxenstat-cmd,path=/run/xen/qmp-libxenstat-3,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxenstat-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-user-config
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   media.hvm
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   127.0.0.1:0,to=99
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -display
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   none
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -kernel
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /etc/xen/media-boot/vmlinuz-4.1.15-gentoo-r1
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -append
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   cirrus-vga,vgamem_mb=8
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   order=d
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   6,maxcpus=6
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=02:16:3e:00:00:07
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif3.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   xenfv,max-ram-below-4g=1073741824
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   9208
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend-swap,if=ide,index=1,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/apub,if=ide,index=3,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with additional environment:
libxl: debug: libxl_dm.c:2098:libxl__spawn_local_dm:   XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: register slotnum=14
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: event epath=/local/domain/0/device-model/3/state
libxl: debug: libxl_exec.c:398:spawn_watch_event: domain 3 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63dda0 wpath=/local/domain/0/device-model/3/state token=14/a: deregister slotnum=14
libxl: error: libxl_dm.c:2187:device_model_spawn_outcome: domain 3 device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1422:domcreate_devmodel_started: device model did not start: -3
libxl: error: libxl_dm.c:2301:kill_device_model: Device Model already exited
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: register slotnum=14
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: register slotnum=15
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: register slotnum=16
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: register slotnum=17
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: register slotnum=18
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: register slotnum=19
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: register slotnum=0
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: register slotnum=1
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: register slotnum=2
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: event epath=/local/domain/0/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51712/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/3/51712/state token=14/b: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ab80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51712/state token=14/b: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: event epath=/local/domain/0/backend/vbd/3/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51728/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/3/51728/state token=15/c: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6587d0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51728/state token=15/c: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: event epath=/local/domain/0/backend/vbd/3/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51760/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/3/51760/state token=16/d: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658ee0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51760/state token=16/d: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: event epath=/local/domain/0/backend/vbd/3/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51776/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/3/51776/state token=17/e: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ae80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51776/state token=17/e: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: event epath=/local/domain/0/backend/vbd/3/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51792/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/3/51792/state token=18/f: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b480: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51792/state token=18/f: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: event epath=/local/domain/0/backend/vbd/3/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51808/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/3/51808/state token=19/10: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ba00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51808/state token=19/10: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: event epath=/local/domain/0/backend/vbd/3/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51824/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/3/51824/state token=0/11: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bf70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51824/state token=0/11: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: event epath=/local/domain/0/backend/vbd/3/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51840/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/3/51840/state token=1/12: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c4e0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51840/state token=1/12: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: event epath=/local/domain/0/backend/vbd/3/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51856/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/3/51856/state token=2/13: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ca40: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51856/state token=2/13: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: event epath=/local/domain/0/backend/vbd/3/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/3/51872/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/3/51872/state token=3/14: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cf90: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/3/51872/state token=3/14: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:220:libxl__get_hotplug_script_info: backend_kind 6, no need to execute scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d9c0: deregister unregistered
libxl: debug: libxl.c:1720:devices_destroy_cb: forked pid 8679 for destroy of domain 3
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b600: complete, rc=-3
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b600: destroy
libxl: debug: libxl.c:1453:libxl_domain_destroy: ao 0x63b7a0: create: how=(nil) callback=(nil) poller=0x63b080
libxl: error: libxl.c:1583:libxl__destroy_domid: non-existant domain 3
libxl: error: libxl.c:1542:domain_destroy_callback: unable to destroy guest with domid 3
libxl: error: libxl.c:1471:domain_destroy_cb: destruction of domain 3 failed
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b7a0: complete, rc=-21
libxl: debug: libxl.c:1462:libxl_domain_destroy: ao 0x63b7a0: inprogress: poller=0x63b080, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b7a0: destroy
xencall:buffer: debug: total allocations:752 total releases:752
xencall:buffer: debug: current allocations:0 maximum allocations:2
xencall:buffer: debug: cache current size:2
xencall:buffer: debug: cache hits:725 misses:2 toobig:25
Parsing config from /etc/xen/media.hvm
libxl: debug: libxl_create.c:1710:do_domain_create: ao 0x63b600: create: how=(nil) callback=(nil) poller=0x63b080
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdb, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdd, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvde, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdf, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdg, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdh, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdi, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdj, using backend phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=unknown
libxl: debug: libxl_device.c:382:libxl__device_disk_set_backend: Disk vdev=xvdk, using backend phy
libxl: debug: libxl_create.c:970:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63daa8: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128", features="(null)"
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 757 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.7, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ... 
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xc6dc4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1c6dc4
domainbuilder: detail: xc_dom_mem_init: mem 9208 MB, pages 0x23f800 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x23f800 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc            : 24560 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003fb
xc: detail:   1GB PAGES: 0x0000000000000007
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0xc7 at 0x7f815b937000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x1c7000  (pfn 0x100 + 0xc7 pages)
xc: detail: elf_load_binary: phdr 0 at 0x7f8156cc2000 -> 0x7f8156d7f238
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x1c7000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 24567 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 757 kB
domainbuilder: detail:       domU mmap          : 796 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff800
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x2ff801
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/4/51712/state token=3/0: register slotnum=3
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdb spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/4/51728/state token=2/1: register slotnum=2
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdd spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/4/51760/state token=1/2: register slotnum=1
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvde spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/4/51776/state token=0/3: register slotnum=0
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdf spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/4/51792/state token=19/4: register slotnum=19
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdg spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x648230 wpath=/local/domain/0/backend/vbd/4/51808/state token=18/5: register slotnum=18
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdh spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/4/51824/state token=17/6: register slotnum=17
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdi spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/4/51840/state token=16/7: register slotnum=16
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdj spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/4/51856/state token=15/8: register slotnum=15
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_device.c:347:libxl__device_disk_set_backend: Disk vdev=xvdk spec.backend=phy
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/4/51872/state token=14/9: register slotnum=14
libxl: debug: libxl_create.c:1736:do_domain_create: ao 0x63b600: inprogress: poller=0x63b080, flags=i
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/4/51712/state token=3/0: event epath=/local/domain/0/backend/vbd/4/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63eb20 wpath=/local/domain/0/backend/vbd/4/51712/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63eb20: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51712/state token=3/0: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/4/51728/state token=2/1: event epath=/local/domain/0/backend/vbd/4/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51728/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63fa70 wpath=/local/domain/0/backend/vbd/4/51728/state token=2/1: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fa70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51728/state token=2/1: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/4/51760/state token=1/2: event epath=/local/domain/0/backend/vbd/4/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51760/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x641db0 wpath=/local/domain/0/backend/vbd/4/51760/state token=1/2: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641db0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51760/state token=1/2: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/4/51776/state token=0/3: event epath=/local/domain/0/backend/vbd/4/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51776/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x643aa0 wpath=/local/domain/0/backend/vbd/4/51776/state token=0/3: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643aa0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51776/state token=0/3: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/4/51792/state token=19/4: event epath=/local/domain/0/backend/vbd/4/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51792/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6464b0 wpath=/local/domain/0/backend/vbd/4/51792/state token=19/4: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6464b0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51792/state token=19/4: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x648230 wpath=/local/domain/0/backend/vbd/4/51808/state token=18/5: event epath=/local/domain/0/backend/vbd/4/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51808/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x648230 wpath=/local/domain/0/backend/vbd/4/51808/state token=18/5: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648230: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51808/state token=18/5: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/4/51824/state token=17/6: event epath=/local/domain/0/backend/vbd/4/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51824/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x649f00 wpath=/local/domain/0/backend/vbd/4/51824/state token=17/6: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x649f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51824/state token=17/6: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/4/51840/state token=16/7: event epath=/local/domain/0/backend/vbd/4/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51840/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x644c00 wpath=/local/domain/0/backend/vbd/4/51840/state token=16/7: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644c00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51840/state token=16/7: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/4/51856/state token=15/8: event epath=/local/domain/0/backend/vbd/4/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51856/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64f200 wpath=/local/domain/0/backend/vbd/4/51856/state token=15/8: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f200: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51856/state token=15/8: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/4/51872/state token=14/9: event epath=/local/domain/0/backend/vbd/4/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51872/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x650f00 wpath=/local/domain/0/backend/vbd/4/51872/state token=14/9: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x650f00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block add 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51872/state token=14/9: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64a000: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63fb70: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x648330: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x641eb0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64f300: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x643ba0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6465b0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ec20: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x644d00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x651000: deregister unregistered
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/media-backend-swap
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub1
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub2
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub3
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub4
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub5
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub6
libxl: debug: libxl.c:3156:libxl__device_disk_find_local_path: Directly accessing local RAW disk /dev/system/apub7
libxl: debug: libxl_dm.c:1498:libxl__build_device_model_args_new: Could not find user xen-qemuuser-shared, starting QEMU as root
libxl: debug: libxl_dm.c:2092:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /usr/libexec/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/run/xen/qmp-libxl-4,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-shutdown
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   socket,id=libxenstat-cmd,path=/run/xen/qmp-libxenstat-4,server,nowait
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   chardev=libxenstat-cmd,mode=control
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -no-user-config
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   media.hvm
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   127.0.0.1:0,to=99
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -display
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   none
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -kernel
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   /etc/xen/media-boot/vmlinuz-4.1.15-gentoo-r1
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -append
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   root=/dev/xvda intel_iommu=on console=ttyS0 console=vga init=/usr/lib/systemd/systemd elevator=deadline xen_blkfront.max=128
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   cirrus-vga,vgamem_mb=8
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   order=d
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   6,maxcpus=6
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=02:16:3e:00:00:07
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif4.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   xenfv,max-ram-below-4g=1073741824
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   9208
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/media-backend-swap,if=ide,index=1,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm:   file=/dev/system/apub,if=ide,index=3,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: Spawning device-model /usr/libexec/xen/bin/qemu-system-i386 with additional environment:
libxl: debug: libxl_dm.c:2098:libxl__spawn_local_dm:   XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63dda0 wpath=/local/domain/0/device-model/4/state token=14/a: register slotnum=14
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63dda0 wpath=/local/domain/0/device-model/4/state token=14/a: event epath=/local/domain/0/device-model/4/state
libxl: debug: libxl_exec.c:398:spawn_watch_event: domain 4 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63dda0 wpath=/local/domain/0/device-model/4/state token=14/a: deregister slotnum=14
libxl: error: libxl_dm.c:2187:device_model_spawn_outcome: domain 4 device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1422:domcreate_devmodel_started: device model did not start: -3
libxl: error: libxl_dm.c:2301:kill_device_model: Device Model already exited
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/4/51712/state token=14/b: register slotnum=14
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/4/51728/state token=15/c: register slotnum=15
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/4/51760/state token=16/d: register slotnum=16
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/4/51776/state token=17/e: register slotnum=17
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/4/51792/state token=18/f: register slotnum=18
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/4/51808/state token=19/10: register slotnum=19
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/4/51824/state token=0/11: register slotnum=0
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/4/51840/state token=1/12: register slotnum=1
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/4/51856/state token=2/13: register slotnum=2
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/4/51872/state token=3/14: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/4/51712/state token=14/b: event epath=/local/domain/0/backend/vbd/4/51712/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51712/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x63ab80 wpath=/local/domain/0/backend/vbd/4/51712/state token=14/b: deregister slotnum=14
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ab80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51712/state token=14/b: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/4/51728/state token=15/c: event epath=/local/domain/0/backend/vbd/4/51728/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51728/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x6587d0 wpath=/local/domain/0/backend/vbd/4/51728/state token=15/c: deregister slotnum=15
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6587d0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51728/state token=15/c: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/4/51760/state token=16/d: event epath=/local/domain/0/backend/vbd/4/51760/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51760/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x658ee0 wpath=/local/domain/0/backend/vbd/4/51760/state token=16/d: deregister slotnum=16
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658ee0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51760/state token=16/d: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/4/51776/state token=17/e: event epath=/local/domain/0/backend/vbd/4/51776/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51776/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ae80 wpath=/local/domain/0/backend/vbd/4/51776/state token=17/e: deregister slotnum=17
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ae80: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51776/state token=17/e: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/4/51792/state token=18/f: event epath=/local/domain/0/backend/vbd/4/51792/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51792/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64b480 wpath=/local/domain/0/backend/vbd/4/51792/state token=18/f: deregister slotnum=18
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b480: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51792/state token=18/f: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/4/51808/state token=19/10: event epath=/local/domain/0/backend/vbd/4/51808/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51808/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ba00 wpath=/local/domain/0/backend/vbd/4/51808/state token=19/10: deregister slotnum=19
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ba00: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51808/state token=19/10: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/4/51824/state token=0/11: event epath=/local/domain/0/backend/vbd/4/51824/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51824/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64bf70 wpath=/local/domain/0/backend/vbd/4/51824/state token=0/11: deregister slotnum=0
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bf70: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51824/state token=0/11: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/4/51840/state token=1/12: event epath=/local/domain/0/backend/vbd/4/51840/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51840/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64c4e0 wpath=/local/domain/0/backend/vbd/4/51840/state token=1/12: deregister slotnum=1
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c4e0: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51840/state token=1/12: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/4/51856/state token=2/13: event epath=/local/domain/0/backend/vbd/4/51856/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51856/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64ca40 wpath=/local/domain/0/backend/vbd/4/51856/state token=2/13: deregister slotnum=2
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64ca40: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51856/state token=2/13: empty slot
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/4/51872/state token=3/14: event epath=/local/domain/0/backend/vbd/4/51872/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vbd/4/51872/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x64cf90 wpath=/local/domain/0/backend/vbd/4/51872/state token=3/14: deregister slotnum=3
libxl: debug: libxl_device.c:1072:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cf90: deregister unregistered
libxl: debug: libxl_linux.c:182:libxl__hotplug_disk: Args and environment ready
libxl: debug: libxl_device.c:1169:device_hotplug: calling hotplug script: /etc/xen/scripts/block remove
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/block remove 
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vbd/4/51872/state token=3/14: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x658fe0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x63ac80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x6588d0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64af80: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64b580: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64bb00: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c070: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64c5e0: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64cb40: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:199:libxl__get_hotplug_script_info: num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d090: deregister unregistered
libxl: debug: libxl_linux.c:220:libxl__get_hotplug_script_info: backend_kind 6, no need to execute scripts
libxl: debug: libxl_device.c:1156:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x64d9c0: deregister unregistered
libxl: debug: libxl.c:1720:devices_destroy_cb: forked pid 10008 for destroy of domain 4
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b600: complete, rc=-3
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b600: destroy
libxl: debug: libxl.c:1453:libxl_domain_destroy: ao 0x63b7a0: create: how=(nil) callback=(nil) poller=0x63b080
libxl: error: libxl.c:1583:libxl__destroy_domid: non-existant domain 4
libxl: error: libxl.c:1542:domain_destroy_callback: unable to destroy guest with domid 4
libxl: error: libxl.c:1471:domain_destroy_cb: destruction of domain 4 failed
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x63b7a0: complete, rc=-21
libxl: debug: libxl.c:1462:libxl_domain_destroy: ao 0x63b7a0: inprogress: poller=0x63b080, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x63b7a0: destroy
xencall:buffer: debug: total allocations:752 total releases:752
xencall:buffer: debug: current allocations:0 maximum allocations:2
xencall:buffer: debug: cache current size:2
xencall:buffer: debug: cache hits:725 misses:2 toobig:25

[-- Attachment #4: hypervisor-console.log --]
[-- Type: text/x-log, Size: 5371 bytes --]

(XEN) [2016-08-30 21:11:34] HVM2 save: CPU
(XEN) [2016-08-30 21:11:34] HVM2 save: PIC
(XEN) [2016-08-30 21:11:34] HVM2 save: IOAPIC
(XEN) [2016-08-30 21:11:34] HVM2 save: LAPIC
(XEN) [2016-08-30 21:11:34] HVM2 save: LAPIC_REGS
(XEN) [2016-08-30 21:11:34] HVM2 save: PCI_IRQ
(XEN) [2016-08-30 21:11:34] HVM2 save: ISA_IRQ
(XEN) [2016-08-30 21:11:34] HVM2 save: PCI_LINK
(XEN) [2016-08-30 21:11:34] HVM2 save: PIT
(XEN) [2016-08-30 21:11:34] HVM2 save: RTC
(XEN) [2016-08-30 21:11:34] HVM2 save: HPET
(XEN) [2016-08-30 21:11:34] HVM2 save: PMTIMER
(XEN) [2016-08-30 21:11:34] HVM2 save: MTRR
(XEN) [2016-08-30 21:11:34] HVM2 save: VIRIDIAN_DOMAIN
(XEN) [2016-08-30 21:11:34] HVM2 save: CPU_XSAVE
(XEN) [2016-08-30 21:11:34] HVM2 save: VIRIDIAN_VCPU
(XEN) [2016-08-30 21:11:34] HVM2 save: VMCE_VCPU
(XEN) [2016-08-30 21:11:34] HVM2 save: TSC_ADJUST
(XEN) [2016-08-30 21:11:34] HVM2 restore: CPU 0
ssh: connect to host media port 22: No route to host
(XEN) [2016-08-30 21:11:36] d0v2 Over-allocation for domain 2: 786689 > 786688
(XEN) [2016-08-30 21:11:36] memory.c:209:d0v2 Could not allocate order=0 extent: id=2 memflags=0 (208 of 512)
(XEN) [2016-08-30 21:11:40] HVM3 save: CPU
(XEN) [2016-08-30 21:11:40] HVM3 save: PIC
(XEN) [2016-08-30 21:11:40] HVM3 save: IOAPIC
(XEN) [2016-08-30 21:11:40] HVM3 save: LAPIC
(XEN) [2016-08-30 21:11:40] HVM3 save: LAPIC_REGS
(XEN) [2016-08-30 21:11:40] HVM3 save: PCI_IRQ
(XEN) [2016-08-30 21:11:40] HVM3 save: ISA_IRQ
(XEN) [2016-08-30 21:11:40] HVM3 save: PCI_LINK
(XEN) [2016-08-30 21:11:40] HVM3 save: PIT
(XEN) [2016-08-30 21:11:40] HVM3 save: RTC
(XEN) [2016-08-30 21:11:40] HVM3 save: HPET
(XEN) [2016-08-30 21:11:40] HVM3 save: PMTIMER
(XEN) [2016-08-30 21:11:40] HVM3 save: MTRR
(XEN) [2016-08-30 21:11:40] HVM3 save: VIRIDIAN_DOMAIN
(XEN) [2016-08-30 21:11:40] HVM3 save: CPU_XSAVE
(XEN) [2016-08-30 21:11:40] HVM3 save: VIRIDIAN_VCPU
(XEN) [2016-08-30 21:11:40] HVM3 save: VMCE_VCPU
(XEN) [2016-08-30 21:11:40] HVM3 save: TSC_ADJUST
(XEN) [2016-08-30 21:11:40] HVM3 restore: CPU 0
(XEN) [2016-08-30 21:11:42] d0v0 Over-allocation for domain 3: 2359553 > 2359552
(XEN) [2016-08-30 21:11:42] memory.c:209:d0v0 Could not allocate order=0 extent: id=3 memflags=0 (192 of 512)
media.hvm is an invalid domain identifier (rc=-6)
host                   : gentoo
release                : 4.7.2-gentoo
version                : #8 SMP Mon Aug 29 17:49:49 CEST 2016
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 2400
hw_caps                : b7ebfbff:77fef3ff:2c100800:00000021:00000001:000037ab:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 65376
free_memory            : 39445
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 7
xen_extra              : .0
xen_version            : 4.7.0
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 
xen_commandline        : ssd-xen-dbg-noidle-autogen-marker console_timestamps=date loglvl=all guest_loglvl=all sync_console iommu=1,verbose,debug iommu_inclusive_mapping=1 com1=115200,8n1 console=com1 dom0_max_vcpus=4 dom0_vcpus_pin=1 dom0_mem=7G,max:7G cpufreq=xen,performance,verbose sched_smt_power_savings=1 apic_verbosity=debug e820-verbose=1 core_parking=power cpuidle=0
cc_compiler            : x86_64-pc-linux-gnu-gcc (Gentoo 5.4.0 p1.0, pie-0.6.5) 5.4.0
cc_compile_by          : 
cc_compile_domain      : alstadheim.priv.no
cc_compile_date        : Tue Aug 30 23:03:24 CEST 2016
build_id               : deca0be472f0e46d4234c7f0ae8bb5d5aced76db
xend_config_format     : 4
<28>Aug 30 23:11:45 total-start: Kan ikke starte media.hvm
(XEN) [2016-08-30 21:14:19] HVM4 save: CPU
(XEN) [2016-08-30 21:14:19] HVM4 save: PIC
(XEN) [2016-08-30 21:14:19] HVM4 save: IOAPIC
(XEN) [2016-08-30 21:14:19] HVM4 save: LAPIC
(XEN) [2016-08-30 21:14:19] HVM4 save: LAPIC_REGS
(XEN) [2016-08-30 21:14:19] HVM4 save: PCI_IRQ
(XEN) [2016-08-30 21:14:19] HVM4 save: ISA_IRQ
(XEN) [2016-08-30 21:14:19] HVM4 save: PCI_LINK
(XEN) [2016-08-30 21:14:19] HVM4 save: PIT
(XEN) [2016-08-30 21:14:19] HVM4 save: RTC
(XEN) [2016-08-30 21:14:19] HVM4 save: HPET
(XEN) [2016-08-30 21:14:19] HVM4 save: PMTIMER
(XEN) [2016-08-30 21:14:19] HVM4 save: MTRR
(XEN) [2016-08-30 21:14:19] HVM4 save: VIRIDIAN_DOMAIN
(XEN) [2016-08-30 21:14:19] HVM4 save: CPU_XSAVE
(XEN) [2016-08-30 21:14:19] HVM4 save: VIRIDIAN_VCPU
(XEN) [2016-08-30 21:14:19] HVM4 save: VMCE_VCPU
(XEN) [2016-08-30 21:14:19] HVM4 save: TSC_ADJUST
(XEN) [2016-08-30 21:14:19] HVM4 restore: CPU 0
(XEN) [2016-08-30 21:14:21] d0v3 Over-allocation for domain 4: 2359553 > 2359552
(XEN) [2016-08-30 21:14:21] memory.c:209:d0v3 Could not allocate order=0 extent: id=4 memflags=0 (192 of 512)
(XEN) [2016-08-30 21:16:43] [VT-D]d1:PCIe: unmap 0000:07:00.0
(XEN) [2016-08-30 21:16:43] [VT-D]d0:PCIe: map 0000:07:00.0
(XEN) [2016-08-30 21:17:08] Hardware Dom0 shutdown: rebooting machine
(XEN) [2016-08-30 21:17:08] [VT-D]iommu.c:1045: Set iommu interrupt affinity error!

[-- Attachment #5: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-08-25 21:18   ` linux
  2016-08-26 10:19     ` Håkon Alstadheim
@ 2016-09-05  9:20     ` linux
  2016-09-05  9:25       ` Wei Liu
  2016-09-05  9:46       ` Jan Beulich
  1 sibling, 2 replies; 24+ messages in thread
From: linux @ 2016-09-05  9:20 UTC (permalink / raw)
  To: Doug Goldstein, Jan Beulich, Wei Liu; +Cc: xen-devel

On 2016-08-25 23:18, linux@eikelenboom.it wrote:
> On 2016-08-25 22:34, Doug Goldstein wrote:
>> On 8/25/16 4:21 PM, linux@eikelenboom.it wrote:
>>> Today i tried to switch some of my HVM guests (qemu-xen) from booting 
>>> of
>>> a kernel *inside* the guest, to a dom0 supplied kernel, which is
>>> described as "Direct Kernel Boot" here:
>>> https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html :
>>> 
>>>     Direct Kernel Boot
>>> 
>>>     Direct kernel boot allows booting directly from a kernel and 
>>> initrd
>>> stored in the host physical
>>>     machine OS, allowing command line arguments to be passed 
>>> directly.
>>> PV guest direct kernel boot
>>>     is supported. HVM guest direct kernel boot is supported with
>>> limitation (it's supported when
>>>     using qemu-xen and default BIOS 'seabios'; not supported in case 
>>> of
>>> stubdom-dm and old rombios.)
>>> 
>>>     kernel="PATHNAME"    Load the specified file as the kernel image.
>>>     ramdisk="PATHNAME"   Load the specified file as the ramdisk.
>>> 
>>> But qemu fails to start, output appended below.
>>> 
>>> I tested with:
>>> - current Xen-unstable, which fails.
>>> - xen-stable-4.7.0 release, which fails.
>>> - xen-stable-4.6.0 release, works fine.
>> 
>> Can you include the logs from xl dmesg around that time frame as well?
> 
> Ah i thought there wasn't any, but didn't check thoroughly or wasn't 
> there
> since the release builds are non-debug by default.
> 
> However, back on xen-unstable:
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: IOAPIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC_REGS
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_IRQ
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: ISA_IRQ
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_LINK
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PIT
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: RTC
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: HPET
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: PMTIMER
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: MTRR
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_DOMAIN
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU_XSAVE
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_VCPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: VMCE_VCPU
> (XEN) [2016-08-25 21:09:15.172] HVM19 save: TSC_ADJUST
> (XEN) [2016-08-25 21:09:15.172] HVM19 restore: CPU 0
> (XEN) [2016-08-25 21:09:16.126] d0v1 Over-allocation for domain 19:
> 262401 > 262400
> (XEN) [2016-08-25 21:09:16.126] memory.c:213:d0v1 Could not allocate
> order=0 extent: id=19 memflags=0 (192 of 512)
> 
> Hmm some off by one issue ?
> 
> 
>> Just wondering how much RAM you're domain is defined with as well?
> 
> 1024 Mb, there is more than enough unallocated memory for xen to start
> the guest (and dom0 is fixed with dom0_mem=1536M,max:1536M and
> ballooning is off)


Hmm it seems my thread was kind of hijacked and i was dropped from the 
CC.

I had some time and bisected the issue and it resulted in:

5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Oct 21 10:56:31 2015 +0200

     x86/shadow: drop stray name tags from sh_{guest_get,map}_eff_l1e()

     They (as a now being removed comment validly says) depend only on 
Xen's
     number of page table levels, and hence their tags didn't serve any
     useful purpose (there could only ever be one instance in a single
     binary, even back in the x86-32 days).

     Further conditionalize the inclusion of PV-specific hook pointers, 
at
     once making sure that PV guests can't ever get other than 4-level 
mode
     enabled for them.

     For consistency reasons shadow_{write,cmpxchg}_guest_entry() also 
get
     moved next to the other PV-only actors, allowing them to become 
static
     just like the $subject ones do.

     Signed-off-by: Jan Beulich <jbeulich@suse.com>
     Acked-by: Tim Deegan <tim@xen.org>

:040000 040000 0c2e3475f81547f934a5960d9f1ac4849707d4ed 
f17f5ff17ca50d6ab908afe9a2d8555d954d3d0a M  xen
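
For anyone wanting to repeat the bisect: it was done roughly along the lines
sketched below (the tag names and the per-step test are written from memory
as a sketch, not copied verbatim from my setup):

    git clone git://xenbits.xen.org/xen.git && cd xen
    git bisect start
    git bisect bad  RELEASE-4.7.0   # direct kernel boot of the HVM guest fails
    git bisect good RELEASE-4.6.0   # same guest config boots fine
    # at every step: build/install the hypervisor, reboot the host,
    # try "xl create guest.cfg" and mark the result:
    git bisect good                 # or: git bisect bad
    # repeat until git reports "<sha> is the first bad commit"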


--
Sander


> 
> --
> Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05  9:20     ` linux
@ 2016-09-05  9:25       ` Wei Liu
  2016-09-05  9:46       ` Jan Beulich
  1 sibling, 0 replies; 24+ messages in thread
From: Wei Liu @ 2016-09-05  9:25 UTC (permalink / raw)
  To: linux; +Cc: Wei Liu, Doug Goldstein, Jan Beulich, xen-devel

On Mon, Sep 05, 2016 at 11:20:30AM +0200, linux@eikelenboom.it wrote:
> On 2016-08-25 23:18, linux@eikelenboom.it wrote:
> >On 2016-08-25 22:34, Doug Goldstein wrote:
> >>On 8/25/16 4:21 PM, linux@eikelenboom.it wrote:
> >>>Today i tried to switch some of my HVM guests (qemu-xen) from booting
> >>>of
> >>>a kernel *inside* the guest, to a dom0 supplied kernel, which is
> >>>described as "Direct Kernel Boot" here:
> >>>https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html :
> >>>
> >>>    Direct Kernel Boot
> >>>
> >>>    Direct kernel boot allows booting directly from a kernel and
> >>>initrd
> >>>stored in the host physical
> >>>    machine OS, allowing command line arguments to be passed directly.
> >>>PV guest direct kernel boot
> >>>    is supported. HVM guest direct kernel boot is supported with
> >>>limitation (it's supported when
> >>>    using qemu-xen and default BIOS 'seabios'; not supported in case
> >>>of
> >>>stubdom-dm and old rombios.)
> >>>
> >>>    kernel="PATHNAME"    Load the specified file as the kernel image.
> >>>    ramdisk="PATHNAME"   Load the specified file as the ramdisk.
> >>>
> >>>But qemu fails to start, output appended below.
> >>>
> >>>I tested with:
> >>>- current Xen-unstable, which fails.
> >>>- xen-stable-4.7.0 release, which fails.
> >>>- xen-stable-4.6.0 release, works fine.
> >>
> >>Can you include the logs from xl dmesg around that time frame as well?
> >
> >Ah i thought there wasn't any, but didn't check thoroughly or wasn't there
> >since the release builds are non-debug by default.
> >
> >However, back on xen-unstable:
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: PIC
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: IOAPIC
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: LAPIC_REGS
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_IRQ
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: ISA_IRQ
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: PCI_LINK
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: PIT
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: RTC
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: HPET
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: PMTIMER
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: MTRR
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_DOMAIN
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: CPU_XSAVE
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: VIRIDIAN_VCPU
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: VMCE_VCPU
> >(XEN) [2016-08-25 21:09:15.172] HVM19 save: TSC_ADJUST
> >(XEN) [2016-08-25 21:09:15.172] HVM19 restore: CPU 0
> >(XEN) [2016-08-25 21:09:16.126] d0v1 Over-allocation for domain 19:
> >262401 > 262400
> >(XEN) [2016-08-25 21:09:16.126] memory.c:213:d0v1 Could not allocate
> >order=0 extent: id=19 memflags=0 (192 of 512)
> >
> >Hmm some off by one issue ?
> >
> >
> >>Just wondering how much RAM you're domain is defined with as well?
> >
> >1024 Mb, there is more than enough unallocated memory for xen to start
> >the guest (and dom0 is fixed with dom0_mem=1536M,max:1536M and
> >ballooning is off)
> 
> 
> Hmm it seems my thread was kind of hijacked and i was dropped from the CC.
> 

Oops, I thought you were CC'ed. Sorry.

> I had some time and bisected the issue and it resulted in:
> 
> 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
> commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
> Author: Jan Beulich <jbeulich@suse.com>
> Date:   Wed Oct 21 10:56:31 2015 +0200
> 
>     x86/shadow: drop stray name tags from sh_{guest_get,map}_eff_l1e()
> 
>     They (as a now being removed comment validly says) depend only on Xen's
>     number of page table levels, and hence their tags didn't serve any
>     useful purpose (there could only ever be one instance in a single
>     binary, even back in the x86-32 days).
> 
>     Further conditionalize the inclusion of PV-specific hook pointers, at
>     once making sure that PV guests can't ever get other than 4-level mode
>     enabled for them.
> 
>     For consistency reasons shadow_{write,cmpxchg}_guest_entry() also get
>     moved next to the other PV-only actors, allowing them to become static
>     just like the $subject ones do.
> 
>     Signed-off-by: Jan Beulich <jbeulich@suse.com>
>     Acked-by: Tim Deegan <tim@xen.org>
> 
> :040000 040000 0c2e3475f81547f934a5960d9f1ac4849707d4ed
> f17f5ff17ca50d6ab908afe9a2d8555d954d3d0a M  xen
> 

Unfortunately I can't see immediately why this would affect QEMU direct
boot. It also suggests that it only affects shadow code -- what kind of
hardware are you using?

Wei.

> 
> --
> Sander
> 
> 
> >
> >--
> >Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05  9:20     ` linux
  2016-09-05  9:25       ` Wei Liu
@ 2016-09-05  9:46       ` Jan Beulich
  2016-09-05 10:02         ` linux
  1 sibling, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2016-09-05  9:46 UTC (permalink / raw)
  To: linux; +Cc: WeiLiu, Doug Goldstein, xen-devel

>>> On 05.09.16 at 11:20, <linux@eikelenboom.it> wrote:
> Hmm it seems my thread was kind of hijacked and i was dropped from the 
> CC.
> 
> I had some time and bisected the issue and it resulted in:
> 
> 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
> commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
> Author: Jan Beulich <jbeulich@suse.com>
> Date:   Wed Oct 21 10:56:31 2015 +0200
> 
>      x86/shadow: drop stray name tags from sh_{guest_get,map}_eff_l1e()

Hmm, as Wei already indicated - that's rather odd. The commit isn't
really supposed to have any effect on functionality (and going
through it again I also can't spot any now). And are you indeed
using shadow mode, and if so does your problem not occur when
you use HAP instead?

In any event, if there was some hidden (and unintended) change
in functionality here, then the most likely result would seem to be
a crash, yet from the log fragment you posted it doesn't look like
there's _any_ relevant hypervisor output.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05  9:46       ` Jan Beulich
@ 2016-09-05 10:02         ` linux
  2016-09-05 10:25           ` Jan Beulich
  0 siblings, 1 reply; 24+ messages in thread
From: linux @ 2016-09-05 10:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: WeiLiu, Doug Goldstein, xen-devel

[-- Attachment #1: Type: text/plain, Size: 1882 bytes --]

On 2016-09-05 11:46, Jan Beulich wrote:
>>>> On 05.09.16 at 11:20, <linux@eikelenboom.it> wrote:
>> Hmm it seems my thread was kind of hijacked and i was dropped from the
>> CC.
>> 
>> I had some time and bisected the issue and it resulted in:
>> 
>> 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
>> commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
>> Author: Jan Beulich <jbeulich@suse.com>
>> Date:   Wed Oct 21 10:56:31 2015 +0200
>> 
>>      x86/shadow: drop stray name tags from 
>> sh_{guest_get,map}_eff_l1e()
> 
> Hmm, as Wei already indicated - that's rather odd. The commit isn't
> really supposed to have any effect on functionality (and going
> through it again I also can't spot any now). And are you indeed
> using shadow mode, and if so does your problem not occur when
> you use HAP instead?
> 
> In any event, if there was some hidden (and unintended) change
> in functionality here, then the most likely result would seem to be
> a crash, yet from the log fragment you posted it doesn't look like
> there's _any_ relevant hypervisor output.
> 
> Jan

Hmm i was already afraid of that.
Attached is the output of xl dmesg; HAP is supported and should be
enabled by default (and i didn't disable it explicitly in my guest.cfg).

I just tried the opposite and specified hap=0 in my guest.cfg; this
case leads to two additional lines of output (the "sh error" lines below):

(XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings(): can't 
find all mappings of mfn 471b69: c=8000000000000003 t=7400000000000001
(XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings(): 
can't find all mappings of mfn 471b68: c=8000000000000003 
t=7400000000000001
(XEN) [2016-09-05 09:58:22.334] d0v5 Over-allocation for domain 3: 
262401 > 262400
(XEN) [2016-09-05 09:58:22.334] memory.c:163:d0v5 Could not allocate 
order=0 extent: id=3 memflags=0 (192 of 512)
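
For reference, the guest config fragment used for these tests looks roughly
like the sketch below (the kernel/ramdisk paths and the config file location
are placeholders, not my actual paths; the rest of the config is unchanged
between the working and failing hypervisor builds):

    cat > /etc/xen/guest.cfg <<'EOF'
    builder  = "hvm"
    memory   = 1024
    kernel   = "/boot/guest/vmlinuz"       # dom0-supplied kernel
    ramdisk  = "/boot/guest/initrd.img"    # dom0-supplied initrd
    hap      = 0    # omit (or set to 1) for the default HAP case
    EOF
    xl -vvv create /etc/xen/guest.cfg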

--
Sander

[-- Attachment #2: xl-dmesg.txt --]
[-- Type: text/plain, Size: 27983 bytes --]

 __  __            _  _   _____                  _        _     _      
 \ \/ /___ _ __   | || | |___  | _   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \  | || |_   / /_| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _| / /__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_)_/    \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
                                                                       
(XEN) Xen version 4.7-unstable (root@dyndns.org) (gcc-4.9.real (Debian 4.9.2-10) 4.9.2) debug=y Mon Sep  5 11:03:14 CEST 2016
(XEN) Latest ChangeSet: Wed Oct 21 10:56:31 2015 +0200 git:5a3ce8f
(XEN) Bootloader: GRUB 2.02~beta2-22+deb8u1
(XEN) Command line: dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps=datems vga=gfx-1280x1024x32 no-cpuidle cpufreq=xen com1=38400,8n1 console=vga,com1 ivrs_ioapic[6]=00:14.0 iommu=on,verbose,debug,amd-iommu-debug conring_size=128k sched=credit2 ucode=-1
(XEN) Video information:
(XEN)  VGA is graphics mode 1280x1024, 32 bpp
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because of reasons unknown
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099400 (usable)
(XEN)  0000000000099400 - 00000000000a0000 (reserved)
(XEN)  00000000000e4000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000009ff90000 (usable)
(XEN)  000000009ff90000 - 000000009ff9e000 (ACPI data)
(XEN)  000000009ff9e000 - 000000009ffe0000 (ACPI NVS)
(XEN)  000000009ffe0000 - 00000000a0000000 (reserved)
(XEN)  00000000ffe00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000560000000 (usable)
(XEN) ACPI: RSDP 000FB100, 0014 (r0 ACPIAM)
(XEN) ACPI: RSDT 9FF90000, 0048 (r1 MSI    OEMSLIC  20100913 MSFT       97)
(XEN) ACPI: FACP 9FF90200, 0084 (r1 7640MS A7640100 20100913 MSFT       97)
(XEN) ACPI: DSDT 9FF905E0, 9427 (r1  A7640 A7640100      100 INTL 20051117)
(XEN) ACPI: FACS 9FF9E000, 0040
(XEN) ACPI: APIC 9FF90390, 0088 (r1 7640MS A7640100 20100913 MSFT       97)
(XEN) ACPI: MCFG 9FF90420, 003C (r1 7640MS OEMMCFG  20100913 MSFT       97)
(XEN) ACPI: SLIC 9FF90460, 0176 (r1 MSI    OEMSLIC  20100913 MSFT       97)
(XEN) ACPI: OEMB 9FF9E040, 0072 (r1 7640MS A7640100 20100913 MSFT       97)
(XEN) ACPI: SRAT 9FF9A5E0, 0108 (r3 AMD    FAM_F_10        2 AMD         1)
(XEN) ACPI: HPET 9FF9A6F0, 0038 (r1 7640MS OEMHPET  20100913 MSFT       97)
(XEN) ACPI: IVRS 9FF9A730, 0110 (r1  AMD     RD890S   202031 AMD         0)
(XEN) ACPI: SSDT 9FF9A840, 0DA4 (r1 A M I  POWERNOW        1 AMD         1)
(XEN) System RAM: 20479MB (20970660kB)
(XEN) SRAT: PXM 0 -> APIC 00 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 01 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 02 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 03 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 04 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 05 -> Node 0
(XEN) SRAT: Node 0 PXM 0 0-a0000
(XEN) SRAT: Node 0 PXM 0 100000-a0000000
(XEN) SRAT: Node 0 PXM 0 100000000-560000000
(XEN) NUMA: Allocated memnodemap from 55c797000 - 55c79d000
(XEN) NUMA: Using 8 for the hash shift.
(XEN) Domain heap initialised
(XEN) Allocated console ring of 128 KiB.
(XEN) vesafb: framebuffer at 0xd0000000, mapped to 0xffff82c000201000, using 6144k, total 16384k
(XEN) vesafb: mode is 1280x1024x32, linelength=5120, font 8x16
(XEN) vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0
(XEN) found SMP MP-table at 000ff780
(XEN) DMI present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1:804,1:0], pm1x_evt[1:800,1:0]
(XEN) ACPI:             wakeup_vec[9ff9e00c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
(XEN) IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-55
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 2 I/O APICs
(XEN) ACPI: HPET id: 0x8300 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 6 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 56 GSI, 1112 MSI/MSI-X
(XEN) AMD Fam10h machine check reporting enabled
(XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Initializing Credit2 scheduler
(XEN)  WARNING: This is experimental software in development.
(XEN)  Use at your own risk.
(XEN)  load_window_shift: 18
(XEN)  underload_balance_tolerance: 0
(XEN)  overload_balance_tolerance: -3
(XEN) csched2_dom_init: Initializing domain 32767
(XEN) csched2_vcpu_insert: Inserting IDLEv0
(XEN) Adding cpu 0 to runqueue 0
(XEN)  First cpu on runqueue, activating
(XEN) Detected 3200.176 MHz processor.
(XEN) Initing memory sharing.
(XEN) alt table ffff82d0802e9e90 -> ffff82d0802eb144
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: Not using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0x110
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x8f
(XEN) AMD-Vi:  OEM_Id AMD  
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD 
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xe0 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xf00 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xf00 -> 0xf01
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x18 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xe00 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xe00 -> 0xe01
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xd00 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xc00 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xb00 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x50 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa00 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x900 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x900 -> 0x901
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x60 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x608 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x800 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x610 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x700 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x68 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x400 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x300 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x300 -> 0x3ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa8 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa9 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x100 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x7
(XEN) AMD-Vi: Disabled HAP memory map sharing with IOMMU
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) [2016-09-05 09:06:59.143] Platform timer is 14.318MHz HPET
(XEN) [2016-09-05 09:06:59.165] microcode: CPU0 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.173] HVM: ASIDs enabled.
(XEN) [2016-09-05 09:06:59.181] SVM: Supported advanced features:
(XEN) [2016-09-05 09:06:59.188]  - Nested Page Tables (NPT)
(XEN) [2016-09-05 09:06:59.196]  - Last Branch Record (LBR) Virtualisation
(XEN) [2016-09-05 09:06:59.204]  - Next-RIP Saved on #VMEXIT
(XEN) [2016-09-05 09:06:59.212]  - Pause-Intercept Filter
(XEN) [2016-09-05 09:06:59.219] HVM: SVM enabled
(XEN) [2016-09-05 09:06:59.227] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2016-09-05 09:06:59.235] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2016-09-05 09:06:59.243] HVM: PVH mode not supported on this platform
(XEN) [2016-09-05 09:06:59.251] csched2_vcpu_insert: Inserting IDLEv1
(XEN) [2016-09-05 09:06:59.259] Adding cpu 1 to runqueue 0
(XEN) [2016-09-05 09:06:59.288] init_pcpu: Strange, cpu 1 already initialized!
(XEN) [2016-09-05 09:06:59.296] csched2_vcpu_insert: Inserting IDLEv2
(XEN) [2016-09-05 09:06:59.304] microcode: CPU1 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.312] Adding cpu 2 to runqueue 0
(XEN) [2016-09-05 09:06:59.340] init_pcpu: Strange, cpu 2 already initialized!
(XEN) [2016-09-05 09:06:59.349] csched2_vcpu_insert: Inserting IDLEv3
(XEN) [2016-09-05 09:06:59.357] microcode: CPU2 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.365] Adding cpu 3 to runqueue 0
(XEN) [2016-09-05 09:06:59.394] init_pcpu: Strange, cpu 3 already initialized!
(XEN) [2016-09-05 09:06:59.403] microcode: CPU3 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.411] csched2_vcpu_insert: Inserting IDLEv4
(XEN) [2016-09-05 09:06:59.420] Adding cpu 4 to runqueue 0
(XEN) [2016-09-05 09:06:59.449] init_pcpu: Strange, cpu 4 already initialized!
(XEN) [2016-09-05 09:06:59.457] microcode: CPU4 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.466] csched2_vcpu_insert: Inserting IDLEv5
(XEN) [2016-09-05 09:06:59.475] Adding cpu 5 to runqueue 0
(XEN) [2016-09-05 09:06:59.504] init_pcpu: Strange, cpu 5 already initialized!
(XEN) [2016-09-05 09:06:59.513] Brought up 6 CPUs
(XEN) [2016-09-05 09:06:59.522] microcode: CPU5 updated from revision 0x10000bf to 0x10000dc
(XEN) [2016-09-05 09:06:59.555] ACPI sleep modes: S3
(XEN) [2016-09-05 09:06:59.564] VPMU: disabled
(XEN) [2016-09-05 09:06:59.573] MCA: Use hw thresholding to adjust polling frequency
(XEN) [2016-09-05 09:06:59.582] mcheck_poll: Machine check polling timer started.
(XEN) [2016-09-05 09:06:59.591] Xenoprofile: Failed to setup IBS LVT offset, IBSCTL = 0xffffffff
(XEN) [2016-09-05 09:06:59.601] Dom0 has maximum 632 PIRQs
(XEN) [2016-09-05 09:06:59.610] csched2_dom_init: Initializing domain 0
(XEN) [2016-09-05 09:06:59.619] csched2_vcpu_insert: Inserting d0v0
(XEN) [2016-09-05 09:06:59.628] NX (Execute Disable) protection active
(XEN) [2016-09-05 09:06:59.638] *** LOADING DOMAIN 0 ***
(XEN) [2016-09-05 09:06:59.813] elf_parse_binary: phdr: paddr=0x1000000 memsz=0x1059000
(XEN) [2016-09-05 09:06:59.823] elf_parse_binary: phdr: paddr=0x2200000 memsz=0x109000
(XEN) [2016-09-05 09:06:59.832] elf_parse_binary: phdr: paddr=0x2309000 memsz=0x18ea8
(XEN) [2016-09-05 09:06:59.842] elf_parse_binary: phdr: paddr=0x2322000 memsz=0x2de000
(XEN) [2016-09-05 09:06:59.851] elf_parse_binary: memory: 0x1000000 -> 0x2600000
(XEN) [2016-09-05 09:06:59.861] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2016-09-05 09:06:59.871] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2016-09-05 09:06:59.880] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2016-09-05 09:06:59.890] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2016-09-05 09:06:59.900] elf_xen_parse_note: INIT_P2M = 0x8000000000
(XEN) [2016-09-05 09:06:59.910] elf_xen_parse_note: ENTRY = 0xffffffff82322180
(XEN) [2016-09-05 09:06:59.920] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2016-09-05 09:06:59.930] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
(XEN) [2016-09-05 09:06:59.950] elf_xen_parse_note: SUPPORTED_FEATURES = 0x90d
(XEN) [2016-09-05 09:06:59.961] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2016-09-05 09:06:59.971] elf_xen_parse_note: LOADER = "generic"
(XEN) [2016-09-05 09:06:59.982] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2016-09-05 09:06:59.993] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2016-09-05 09:07:00.003] elf_xen_parse_note: MOD_START_PFN = 0x1
(XEN) [2016-09-05 09:07:00.014] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2016-09-05 09:07:00.025] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2016-09-05 09:07:00.036] elf_xen_addr_calc_check: addresses:
(XEN) [2016-09-05 09:07:00.047]     virt_base        = 0xffffffff80000000
(XEN) [2016-09-05 09:07:00.058]     elf_paddr_offset = 0x0
(XEN) [2016-09-05 09:07:00.069]     virt_offset      = 0xffffffff80000000
(XEN) [2016-09-05 09:07:00.080]     virt_kstart      = 0xffffffff81000000
(XEN) [2016-09-05 09:07:00.091]     virt_kend        = 0xffffffff82600000
(XEN) [2016-09-05 09:07:00.102]     virt_entry       = 0xffffffff82322180
(XEN) [2016-09-05 09:07:00.113]     p2m_base         = 0x8000000000
(XEN) [2016-09-05 09:07:00.124]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2016-09-05 09:07:00.135]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2600000
(XEN) [2016-09-05 09:07:00.146] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2016-09-05 09:07:00.157]  Dom0 alloc.:   0000000548000000->000000054c000000 (370684 pages to be allocated)
(XEN) [2016-09-05 09:07:00.169]  Init. ramdisk: 000000055e7f8000->000000055fffb600
(XEN) [2016-09-05 09:07:00.180] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2016-09-05 09:07:00.191]  Loaded kernel: ffffffff81000000->ffffffff82600000
(XEN) [2016-09-05 09:07:00.202]  Init. ramdisk: 0000000000000000->0000000000000000
(XEN) [2016-09-05 09:07:00.213]  Phys-Mach map: 0000008000000000->0000008000300000
(XEN) [2016-09-05 09:07:00.225]  Start info:    ffffffff82600000->ffffffff826004b4
(XEN) [2016-09-05 09:07:00.236]  Page tables:   ffffffff82601000->ffffffff82618000
(XEN) [2016-09-05 09:07:00.247]  Boot stack:    ffffffff82618000->ffffffff82619000
(XEN) [2016-09-05 09:07:00.258]  TOTAL:         ffffffff80000000->ffffffff82800000
(XEN) [2016-09-05 09:07:00.269]  ENTRY ADDRESS: ffffffff82322180
(XEN) [2016-09-05 09:07:00.281] Dom0 has maximum 6 VCPUs
(XEN) [2016-09-05 09:07:00.292] csched2_vcpu_insert: Inserting d0v1
(XEN) [2016-09-05 09:07:00.303] csched2_vcpu_insert: Inserting d0v2
(XEN) [2016-09-05 09:07:00.314] csched2_vcpu_insert: Inserting d0v3
(XEN) [2016-09-05 09:07:00.325] csched2_vcpu_insert: Inserting d0v4
(XEN) [2016-09-05 09:07:00.336] csched2_vcpu_insert: Inserting d0v5
(XEN) [2016-09-05 09:07:00.347] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff82059000
(XEN) [2016-09-05 09:07:00.365] elf_load_binary: phdr 1 at 0xffffffff82200000 -> 0xffffffff82309000
(XEN) [2016-09-05 09:07:00.377] elf_load_binary: phdr 2 at 0xffffffff82309000 -> 0xffffffff82321ea8
(XEN) [2016-09-05 09:07:00.388] elf_load_binary: phdr 3 at 0xffffffff82322000 -> 0xffffffff82431000
(XEN) [2016-09-05 09:07:01.537] AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.548] AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.560] AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.572] AMD-Vi: Setup I/O page table: device id = 0x18, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.584] AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.596] AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.609] AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.621] AMD-Vi: Setup I/O page table: device id = 0x50, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.634] AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.647] AMD-Vi: Setup I/O page table: device id = 0x60, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.660] AMD-Vi: Setup I/O page table: device id = 0x68, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.674] AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.687] AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.701] AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.714] AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.728] AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.742] AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.756] AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.771] AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.785] AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.800] AMD-Vi: Setup I/O page table: device id = 0xa8, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.814] AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.829] AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.844] AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) [2016-09-05 09:07:01.859] AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) [2016-09-05 09:07:01.874] AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) [2016-09-05 09:07:01.888] AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) [2016-09-05 09:07:01.903] AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) [2016-09-05 09:07:01.918] AMD-Vi: Setup I/O page table: device id = 0x400, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.933] AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.948] AMD-Vi: Setup I/O page table: device id = 0x608, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.964] AMD-Vi: Setup I/O page table: device id = 0x610, type = 0x2, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.979] AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:01.995] AMD-Vi: Setup I/O page table: device id = 0x800, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.011] AMD-Vi: Setup I/O page table: device id = 0x900, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.027] AMD-Vi: Setup I/O page table: device id = 0x901, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.044] AMD-Vi: Setup I/O page table: device id = 0xa00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.060] AMD-Vi: Setup I/O page table: device id = 0xb00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.077] AMD-Vi: Setup I/O page table: device id = 0xc00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.094] AMD-Vi: Setup I/O page table: device id = 0xd00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.111] AMD-Vi: Setup I/O page table: device id = 0xe00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.128] AMD-Vi: Setup I/O page table: device id = 0xe01, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.145] AMD-Vi: Setup I/O page table: device id = 0xf00, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.163] AMD-Vi: Setup I/O page table: device id = 0xf01, type = 0x1, root table = 0x54e59a000, domain = 0, paging mode = 3
(XEN) [2016-09-05 09:07:02.187] Scrubbing Free RAM on 1 nodes using 6 CPUs
(XEN) [2016-09-05 09:07:02.297] .............................done.
(XEN) [2016-09-05 09:07:05.393] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2016-09-05 09:07:05.410] Std. Loglevel: All
(XEN) [2016-09-05 09:07:05.428] Guest Loglevel: All
(XEN) [2016-09-05 09:07:05.445] Xen is relinquishing VGA console.
(XEN) [2016-09-05 09:07:05.547] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2016-09-05 09:07:05.547] Freed 296kB init memory.
(XEN) [2016-09-05 09:07:05.732] traps.c:2681:d0v0 Domain attempted WRMSR 00000000c0010007 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) [2016-09-05 09:07:06.193] PCI add device 0000:00:00.0
(XEN) [2016-09-05 09:07:06.193] PCI add device 0000:00:00.2
(XEN) [2016-09-05 09:07:06.193] PCI add device 0000:00:02.0
(XEN) [2016-09-05 09:07:06.193] PCI add device 0000:00:03.0
(XEN) [2016-09-05 09:07:06.193] PCI add device 0000:00:05.0
(XEN) [2016-09-05 09:07:06.194] PCI add device 0000:00:06.0
(XEN) [2016-09-05 09:07:06.194] PCI add device 0000:00:09.0
(XEN) [2016-09-05 09:07:06.194] PCI add device 0000:00:0a.0
(XEN) [2016-09-05 09:07:06.194] PCI add device 0000:00:0b.0
(XEN) [2016-09-05 09:07:06.194] PCI add device 0000:00:0c.0
(XEN) [2016-09-05 09:07:06.195] PCI add device 0000:00:0d.0
(XEN) [2016-09-05 09:07:06.195] PCI add device 0000:00:11.0
(XEN) [2016-09-05 09:07:06.195] PCI add device 0000:00:12.0
(XEN) [2016-09-05 09:07:06.195] PCI add device 0000:00:12.2
(XEN) [2016-09-05 09:07:06.196] PCI add device 0000:00:13.0
(XEN) [2016-09-05 09:07:06.196] PCI add device 0000:00:13.2
(XEN) [2016-09-05 09:07:06.196] PCI add device 0000:00:14.0
(XEN) [2016-09-05 09:07:06.196] PCI add device 0000:00:14.3
(XEN) [2016-09-05 09:07:06.196] PCI add device 0000:00:14.4
(XEN) [2016-09-05 09:07:06.197] PCI add device 0000:00:14.5
(XEN) [2016-09-05 09:07:06.197] PCI add device 0000:00:15.0
(XEN) [2016-09-05 09:07:06.197] PCI add device 0000:00:16.0
(XEN) [2016-09-05 09:07:06.197] PCI add device 0000:00:16.2
(XEN) [2016-09-05 09:07:06.197] PCI add device 0000:00:18.0
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:00:18.1
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:00:18.2
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:00:18.3
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:00:18.4
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:0f:00.0
(XEN) [2016-09-05 09:07:06.198] PCI add device 0000:0f:00.1
(XEN) [2016-09-05 09:07:06.207] PCI add device 0000:0e:00.0
(XEN) [2016-09-05 09:07:06.207] PCI add device 0000:0e:00.1
(XEN) [2016-09-05 09:07:06.217] PCI add device 0000:0d:00.0
(XEN) [2016-09-05 09:07:06.227] PCI add device 0000:0c:00.0
(XEN) [2016-09-05 09:07:06.237] PCI add device 0000:0b:00.0
(XEN) [2016-09-05 09:07:06.247] PCI add device 0000:0a:00.0
(XEN) [2016-09-05 09:07:06.257] PCI add device 0000:09:00.0
(XEN) [2016-09-05 09:07:06.258] PCI add device 0000:09:00.1
(XEN) [2016-09-05 09:07:06.268] PCI add device 0000:05:00.0
(XEN) [2016-09-05 09:07:06.278] PCI add device 0000:06:01.0
(XEN) [2016-09-05 09:07:06.278] PCI add device 0000:06:02.0
(XEN) [2016-09-05 09:07:06.279] PCI add device 0000:08:00.0
(XEN) [2016-09-05 09:07:06.288] PCI add device 0000:07:00.0
(XEN) [2016-09-05 09:07:06.298] PCI add device 0000:04:00.0
(XEN) [2016-09-05 09:07:06.308] PCI add device 0000:03:06.0
(XEN) [2016-09-05 09:07:06.322] PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) [2016-09-05 09:07:11.859] d0: Forcing read-only access to MFN fed00
(XEN) [2016-09-05 09:09:35.204] csched2_dom_init: Initializing domain 1
(XEN) [2016-09-05 09:09:35.216] csched2_vcpu_insert: Inserting d1v0
(XEN) [2016-09-05 09:09:35.216] csched2_vcpu_insert: Inserting d1v1
(XEN) [2016-09-05 09:09:35.216] csched2_vcpu_insert: Inserting d1v2
(XEN) [2016-09-05 09:09:35.216] csched2_vcpu_insert: Inserting d1v3
(XEN) [2016-09-05 09:09:35.317] HVM1 save: CPU
(XEN) [2016-09-05 09:09:35.317] HVM1 save: PIC
(XEN) [2016-09-05 09:09:35.317] HVM1 save: IOAPIC
(XEN) [2016-09-05 09:09:35.317] HVM1 save: LAPIC
(XEN) [2016-09-05 09:09:35.317] HVM1 save: LAPIC_REGS
(XEN) [2016-09-05 09:09:35.317] HVM1 save: PCI_IRQ
(XEN) [2016-09-05 09:09:35.317] HVM1 save: ISA_IRQ
(XEN) [2016-09-05 09:09:35.317] HVM1 save: PCI_LINK
(XEN) [2016-09-05 09:09:35.317] HVM1 save: PIT
(XEN) [2016-09-05 09:09:35.317] HVM1 save: RTC
(XEN) [2016-09-05 09:09:35.317] HVM1 save: HPET
(XEN) [2016-09-05 09:09:35.317] HVM1 save: PMTIMER
(XEN) [2016-09-05 09:09:35.317] HVM1 save: MTRR
(XEN) [2016-09-05 09:09:35.317] HVM1 save: VIRIDIAN_DOMAIN
(XEN) [2016-09-05 09:09:35.317] HVM1 save: CPU_XSAVE
(XEN) [2016-09-05 09:09:35.317] HVM1 save: VIRIDIAN_VCPU
(XEN) [2016-09-05 09:09:35.317] HVM1 save: VMCE_VCPU
(XEN) [2016-09-05 09:09:35.317] HVM1 save: TSC_ADJUST
(XEN) [2016-09-05 09:09:35.317] HVM1 restore: CPU 0
(XEN) [2016-09-05 09:09:35.909] d0v0 Over-allocation for domain 1: 262401 > 262400
(XEN) [2016-09-05 09:09:35.909] memory.c:163:d0v0 Could not allocate order=0 extent: id=1 memflags=0 (192 of 512)
(XEN) [2016-09-05 09:09:38.422] csched2_vcpu_insert: Inserting d1v0
(XEN) [2016-09-05 09:09:38.422] csched2_vcpu_insert: Inserting d1v1
(XEN) [2016-09-05 09:09:38.422] csched2_vcpu_insert: Inserting d1v2
(XEN) [2016-09-05 09:09:38.422] csched2_vcpu_insert: Inserting d1v3

[-- Attachment #3: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05 10:02         ` linux
@ 2016-09-05 10:25           ` Jan Beulich
  2016-09-05 11:19             ` linux
  0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2016-09-05 10:25 UTC (permalink / raw)
  To: linux; +Cc: WeiLiu, Doug Goldstein, xen-devel

>>> On 05.09.16 at 12:02, <linux@eikelenboom.it> wrote:
> On 2016-09-05 11:46, Jan Beulich wrote:
>>>>> On 05.09.16 at 11:20, <linux@eikelenboom.it> wrote:
>>> Hmm it seems my thread was kind of hijacked and i was dropped from the
>>> CC.
>>> 
>>> I had some time and bisected the issue and it resulted in:
>>> 
>>> 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
>>> commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
>>> Author: Jan Beulich <jbeulich@suse.com>
>>> Date:   Wed Oct 21 10:56:31 2015 +0200
>>> 
>>>      x86/shadow: drop stray name tags from 
>>> sh_{guest_get,map}_eff_l1e()
>> 
>> Hmm, as Wei already indicated - that's rather odd. The commit isn't
>> really supposed to have any effect on functionality (and going
>> through it again I also can't spot any now). And are you indeed
>> using shadow mode, and if so does your problem not occur when
>> you use HAP instead?
>> 
>> In any event, if there was some hidden (and unintended) change
>> in functionality here, then the most likely result would seem to be
>> a crash, yet from the log fragment you posted it doesn't look like
>> there's _any_ relevant hypervisor output.
> 
> Hmm i was already afraid of that.
> Attached is the output of xl dmesg, HAP is supported and should be 
> enabled by default (and i didn't disable it explicitly in my guest.cfg).
> 
> I just tried the opposite and specified hap=0 in my guest.cfg and this 
> case leads to 2 lines of additional output:
> 
> XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings(): can't 
> find all mappings of mfn 471b69: c=8000000000000003 t=7400000000000001
> (XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings(): 
> can't find all mappings of mfn 471b68: c=8000000000000003 
> t=7400000000000001

And these two messages are relevant here? I.e. do they go away
when you use a commit ahead of the one your bisect spotted?

Anyway - with you quite clearly having used HAP before, I can't
see how this commit would matter for you at all. In case you want
to double check you could try with a hypervisor built without
shadow paging code (which we've been allowing for quite a
while).

Is it possible that the reproduction of the issue isn't 100% reliable?
I.e. did you verify with a couple of runs each that it really is this
commit, and not just some spurious effect? If it is, then from all I
know so far I'd suspect an effect from code / data arrangement
rather than the commit itself to be the actual culprit. Which reminds
me of another possible way of double checking: If said commit
reverts reasonably cleanly at the tip of staging or master, maybe
you could try with just this change reverted, instead of with
everything subsequent to it reverted too?
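
Concretely, those two checks could look something like the sketch below
(the Kconfig symbol name is from memory, and the build/install steps are
whatever you normally use):

    # 1) revert-only test on top of current staging (assuming it applies cleanly):
    git checkout staging
    git revert 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
    make -C xen            # then install, reboot, retry "xl create guest.cfg"

    # 2) retest with a hypervisor built without shadow paging code
    #    (assumed Kconfig symbol, possibly gated behind EXPERT):
    make -C xen menuconfig # disable SHADOW_PAGING
    make -C xen            # install, reboot, retry as above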

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05 10:25           ` Jan Beulich
@ 2016-09-05 11:19             ` linux
  2016-09-05 11:43               ` Jan Beulich
  2016-10-13 14:43               ` Sander Eikelenboom
  0 siblings, 2 replies; 24+ messages in thread
From: linux @ 2016-09-05 11:19 UTC (permalink / raw)
  To: Jan Beulich; +Cc: WeiLiu, Doug Goldstein, xen-devel

On 2016-09-05 12:25, Jan Beulich wrote:
>>>> On 05.09.16 at 12:02, <linux@eikelenboom.it> wrote:
>> On 2016-09-05 11:46, Jan Beulich wrote:
>>>>>> On 05.09.16 at 11:20, <linux@eikelenboom.it> wrote:
>>>> Hmm it seems my thread was kind of hijacked and i was dropped from 
>>>> the
>>>> CC.
>>>> 
>>>> I had some time and bisected the issue and it resulted in:
>>>> 
>>>> 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f is the first bad commit
>>>> commit 5a3ce8f85e7e7bdd339d259daa19f6bc5cb4735f
>>>> Author: Jan Beulich <jbeulich@suse.com>
>>>> Date:   Wed Oct 21 10:56:31 2015 +0200
>>>> 
>>>>      x86/shadow: drop stray name tags from
>>>> sh_{guest_get,map}_eff_l1e()
>>> 
>>> Hmm, as Wei already indicated - that's rather odd. The commit isn't
>>> really supposed to have any effect on functionality (and going
>>> through it again I also can't spot any now). And are you indeed
>>> using shadow mode, and if so does your problem not occur when
>>> you use HAP instead?
>>> 
>>> In any event, if there was some hidden (and unintended) change
>>> in functionality here, then the most likely result would seem to be
>>> a crash, yet from the log fragment you posted it doesn't look like
>>> there's _any_ relevant hypervisor output.
>> 
>> Hmm i was already afraid of that.
>> Attached is the output of xl dmesg, HAP is supported and should be
>> enabled by default (and i didn't disable it explicitly in my 
>> guest.cfg).
>> 
>> I just tried the opposite and specified hap=0 in my guest.cfg and this
>> case leads to 2 lines of additional output:
>> 
>> XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings(): 
>> can't
>> find all mappings of mfn 471b69: c=8000000000000003 t=7400000000000001
>> (XEN) [2016-09-05 09:58:22.201] sh error: sh_remove_all_mappings():
>> can't find all mappings of mfn 471b68: c=8000000000000003
>> t=7400000000000001
> 
> And these two messages are relevant here? I.e. do they go away
> when you use a commit ahead of the one your bisect spotted?

Just double-checked with a build of the commit just before the culprit
the bisection reported, together with hap=0: those messages are there as
well and the guest boots fine now.
So they don't seem to be relevant.

> Anyway - with you quite clearly having used HAP before, I can't
> see how this commit would matter for you at all. In case you want
> to double check you could try with a hypervisor built without
> shadow paging code (which we've been allowing for quite a
> while).

I just tried that, and without the shadow paging code the guest boots fine,
so that's interesting.

> Is it possible that the reproduction of the issue isn't 100% reliable?

Nope, it seems 100% reliable.

> I.e. did you verify with a couple of runs each that it really is this
> commit, and not just some spurious effect? If it is, then from all I
> know so far I'd suspect an effect from code / data arrangement
> rather than the commit itself to be the actual culprit.

Well, at least one other independent user is running into the same issue,
so it doesn't seem to be specific to my machine or my builds.

It also happens both when running all my guests (this one is the last to
start) and when starting only this guest.

> Which reminds
> me of another possible way of double checking: If said commit
> reverts reasonably cleanly at the tip of staging or master, maybe
> you could try with just this change reverted, instead of with
> everything subsequent to it reverted too?

Nope, I tried that already and it didn't revert cleanly (and I didn't see
how to fix it up correctly).

--
Sander

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05 11:19             ` linux
@ 2016-09-05 11:43               ` Jan Beulich
  2016-09-05 12:00                 ` linux
  2016-10-13 14:43               ` Sander Eikelenboom
  1 sibling, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2016-09-05 11:43 UTC (permalink / raw)
  To: linux; +Cc: WeiLiu, Doug Goldstein, xen-devel

>>> On 05.09.16 at 13:19, <linux@eikelenboom.it> wrote:
> On 2016-09-05 12:25, Jan Beulich wrote:
>> Anyway - with you quite clearly having used HAP before, I can't
>> see how this commit would matter for you at all. In case you want
>> to double check you could try with a hypervisor built without
>> shadow paging code (which we've been allowing for quite a
>> while).
> 
> I just tried that and without shadow paging code the guest boots fine, 
> so that's interesting.

Indeed. Was that try with plain staging/master, or with much of
the reverts in place (from the bisection)? It seems to me that
investigating this odd difference would perhaps be a better
route than trying to guess what's wrong with said commit.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05 11:43               ` Jan Beulich
@ 2016-09-05 12:00                 ` linux
  0 siblings, 0 replies; 24+ messages in thread
From: linux @ 2016-09-05 12:00 UTC (permalink / raw)
  To: Jan Beulich; +Cc: WeiLiu, Doug Goldstein, xen-devel

On 2016-09-05 13:43, Jan Beulich wrote:
>>>> On 05.09.16 at 13:19, <linux@eikelenboom.it> wrote:
>> On 2016-09-05 12:25, Jan Beulich wrote:
>>> Anyway - with you quite clearly having used HAP before, I can't
>>> see how this commit would matter for you at all. In case you want
>>> to double check you could try with a hypervisor built without
>>> shadow paging code (which we've been allowing for quite a
>>> while).
>> 
>> I just tried that and without shadow paging code the guest boots fine,
>> so that's interesting.
> 
> Indeed. Was that try with plain staging/master, or with much of
> the reverts in place (from the bisection)? It seems to me that
> investigating this odd difference would perhaps be a better
> route than trying to guess what's wrong with said commit.
> 
> Jan

It was a try with a tree at the culprit commit, with xen/arch/x86/Rules.mk
edited to disable the shadow paging code from being built.

Now I just tried with unstable, using Kconfig, but with that build the
guest doesn't boot.
So either
the Kconfig option doesn't work,
or the reliability isn't 100% after all (but I'd say I should have noticed
that earlier on),
or there is something else (did the semantics around disabling it change?).

*sigh*, seems it's not going to be an easy one :-\

My /boot/xen-4.8-unstable.config gives:
#
# Architecture Features
#
CONFIG_NR_CPUS=256
# CONFIG_SHADOW_PAGING is not set
# CONFIG_BIGMEM is not set
CONFIG_HVM_FEP=y
CONFIG_TBOOT=y

So it should be off, I guess.

--
Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-09-05 11:19             ` linux
  2016-09-05 11:43               ` Jan Beulich
@ 2016-10-13 14:43               ` Sander Eikelenboom
  2016-10-17 15:28                 ` Sander Eikelenboom
  1 sibling, 1 reply; 24+ messages in thread
From: Sander Eikelenboom @ 2016-10-13 14:43 UTC (permalink / raw)
  To: linux; +Cc: WeiLiu, Doug Goldstein, Jan Beulich, xen-devel

Hi Jan / Wei,

It took a while before I had the chance to fiddle some more to find the actual culprit.
After analyzing the output of xl -vvvvv create in more detail, I came to the
insight that it was probably QEMU and not Xen causing the fault.

As a test I used a qemu-xen binary built with xen-4.6.0 to boot a guest in
direct kernel boot mode on xen-unstable, and that old qemu binary works fine.
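
(For anyone who wants to repeat that cross-test: xl can be pointed at an
arbitrary device-model binary from the guest config; the path below is just
a placeholder for wherever the 4.6.0-built qemu ended up installed.)

# guest.cfg fragment: use a specific qemu-system-i386 binary instead of the
# one shipped with the current xen build
device_model_version  = "qemu-xen"
device_model_override = "/opt/xen-4.6.0/lib/xen/bin/qemu-system-i386"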

After testing I can conclude that Jan was right: the bisection was a red herring,
and the problem is caused by some change in QEMU, not by something in the Xen tree.
(The strange thing is that, as far as I know, I did a "make distclean" between
every build (taking a lot of time), which should have pulled a fresh qemu-xen
tree, and therefore the bisection should have led to a commit with a Config.mk
hash change for the qemu-xen version.)

I will see if I can find some more time to bisect qemu and find the culprit.

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-13 14:43               ` Sander Eikelenboom
@ 2016-10-17 15:28                 ` Sander Eikelenboom
  2016-10-18 12:48                   ` Wei Liu
  0 siblings, 1 reply; 24+ messages in thread
From: Sander Eikelenboom @ 2016-10-17 15:28 UTC (permalink / raw)
  To: WeiLiu; +Cc: Doug Goldstein, Jan Beulich, xen-devel

Thursday, October 13, 2016, 4:43:31 PM, you wrote:

> Hi Jan / Wei,

> Took a while before i had the chance to fiddle some more to find the actual culprit.
> After analyzing the output of xl -vvvvv create somewhat more i came to the 
> insight it was probably Qemu and not Xen causing the fault.

> As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> direct kernel boot mode on xen-unstable. And that old qemu binary works fine.

> After testing i can conclude, Jan was right, the bisection was a red herring,
> the problem is caused by some change in Qemu and not by something in the Xen tree.
> (strange thing is that for as far as i know i did a "make distclean" between 
> every build (taking a lot of time), which should have pulled a fresh qemu-xen 
> tree and therefor the bisection should have lead to a commit with a Config.mk 
> hash change for qemu-xen version.)

> Will see if i can find some more time and bisect qemu and find the culprit.

> --
> Sander


Unfortunately I have to give up on this issue; with my present git-foo it's
impossible for me to bisect it.

The first try, bisecting the whole xen tree, seems to have hit the issue that the
qemu revision that gets pulled on a fresh build is "master" during the whole
dev period. That creates havoc when trying to bisect, since you end up testing
combinations that were never developed (nor auto-tested) together
(especially when a xen-tree change and a qemu-tree change depend on each other,
like Roger's "xen: fix usage of xc_domain_create in domain builder").

While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and
seabios on rel-1.8.2), I got stuck on issues with that tree.
Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
to git://xenbits.xen.org/qemu-xen.git; after that there seem to have been a lot
of merges going back and forth, and to me it looks like a mess (but, as I said,
it could also be a lack of git-foo). I tried manual bisecting, removing and
cloning trees again, etc., but that doesn't suffice; it's all going nowhere.
(Meanwhile the known-good build (plain RELEASE-4.6.0) always works, so it
doesn't seem to be some random problem.)

So perhaps some dev can at least verify that the issue is there (since 4.7.0)
and put it on the "known broken" list of things.

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-17 15:28                 ` Sander Eikelenboom
@ 2016-10-18 12:48                   ` Wei Liu
  2016-10-18 21:32                     ` Håkon Alstadheim
  2016-10-25 11:24                     ` Wei Liu
  0 siblings, 2 replies; 24+ messages in thread
From: Wei Liu @ 2016-10-18 12:48 UTC (permalink / raw)
  To: Sander Eikelenboom; +Cc: Doug Goldstein, WeiLiu, Jan Beulich, xen-devel

On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
> Thursday, October 13, 2016, 4:43:31 PM, you wrote:
> 
> > Hi Jan / Wei,
> 
> > Took a while before i had the chance to fiddle some more to find the actual culprit.
> > After analyzing the output of xl -vvvvv create somewhat more i came to the 
> > insight it was probably Qemu and not Xen causing the fault.
> 
> > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
> 
> > After testing i can conclude, Jan was right, the bisection was a red herring,
> > the problem is caused by some change in Qemu and not by something in the Xen tree.
> > (strange thing is that for as far as i know i did a "make distclean" between 
> > every build (taking a lot of time), which should have pulled a fresh qemu-xen 
> > tree and therefor the bisection should have lead to a commit with a Config.mk 
> > hash change for qemu-xen version.)
> 
> > Will see if i can find some more time and bisect qemu and find the culprit.
> 
> > --
> > Sander
> 
> 
> Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
> issue with my present git-foo.
> 
> The first try with bisection of the whole xen-tree seems to have hit the issue that the 
> qemu-revision that gets pulled on a fresh build is "master" during the whole
> dev period. That creates havoc when trying to bisect, since you are testing 
> combinations that were never developed (nor auto tested) in that combination
> (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
> "xen: fix usage of xc_domain_create in domain builder")
> 
> While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
> seabios on rel-1.8.2) it get stuck on issues with that tree.
> Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
> to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
> been a lot of merges going back and forth and to me it seems a mess (but as i 
> said it could also be a lack of git-foo). I tried by manual bisecting, removing 
> and cloning trees again etc. but that doesn't suffice, it's all going no-where.
> (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
> seem to be some random problem)
> 

Thanks for trying.

> So perhaps some dev can at least verify that the issue is there (since 4.7.0)
> and put it on the "known broken" list of things.
> 

I will put this into the list of things I need to look at.

Wei.

> --
> Sander
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-18 12:48                   ` Wei Liu
@ 2016-10-18 21:32                     ` Håkon Alstadheim
  2016-10-25 11:24                     ` Wei Liu
  1 sibling, 0 replies; 24+ messages in thread
From: Håkon Alstadheim @ 2016-10-18 21:32 UTC (permalink / raw)
  To: xen-devel

Den 18. okt. 2016 14:48, skrev Wei Liu:
> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
>> Thursday, October 13, 2016, 4:43:31 PM, you wrote:
>>
>>> Hi Jan / Wei,
>>> Took a while before i had the chance to fiddle some more to find the actual culprit.
>>> After analyzing the output of xl -vvvvv create somewhat more i came to the 
>>> insight it was probably Qemu and not Xen causing the fault.
>>> As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
>>> direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
>>> After testing i can conclude, Jan was right, the bisection was a red herring,
>>> the problem is caused by some change in Qemu and not by something in the Xen tree.
>>> (strange thing is that for as far as i know i did a "make distclean" between 
>>> every build (taking a lot of time), which should have pulled a fresh qemu-xen 
>>> tree and therefor the bisection should have lead to a commit with a Config.mk 
>>> hash change for qemu-xen version.)
>>> Will see if i can find some more time and bisect qemu and find the culprit.
>>> --
>>> Sander
>>
>> Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
>> issue with my present git-foo.
>>
>> The first try with bisection of the whole xen-tree seems to have hit the issue that the 
>> qemu-revision that gets pulled on a fresh build is "master" during the whole
>> dev period. That creates havoc when trying to bisect, since you are testing 
>> combinations that were never developed (nor auto tested) in that combination
>> (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
>> "xen: fix usage of xc_domain_create in domain builder")
>>
>> While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
>> seabios on rel-1.8.2) it get stuck on issues with that tree.
>> Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
>> to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
>> been a lot of merges going back and forth and to me it seems a mess (but as i 
>> said it could also be a lack of git-foo). I tried by manual bisecting, removing 
>> and cloning trees again etc. but that doesn't suffice, it's all going no-where.
>> (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
>> seem to be some random problem)
>>
> Thanks for trying.
>
>> So perhaps some dev can at least verify that the issue is there (since 4.7.0)
>> and put it on the "known broken" list of things.
>>
> I will put this into the list of things I need to look at.
>
> Wei.
>
In the meantime, a viable workaround is to use PXE boot if one needs
external boot for HVM under xen-4.7.

Still, the effort is appreciated :-).
Regards, Håkon.

P.S.: I had some difficulty with PXE boot and the serial console; feel free
to email me directly if anyone wants to compare notes.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-18 12:48                   ` Wei Liu
  2016-10-18 21:32                     ` Håkon Alstadheim
@ 2016-10-25 11:24                     ` Wei Liu
  2016-10-25 11:37                       ` Sander Eikelenboom
  1 sibling, 1 reply; 24+ messages in thread
From: Wei Liu @ 2016-10-25 11:24 UTC (permalink / raw)
  To: Sander Eikelenboom; +Cc: Doug Goldstein, WeiLiu, Jan Beulich, xen-devel

On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
> > 
> > > Hi Jan / Wei,
> > 
> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
> > > After analyzing the output of xl -vvvvv create somewhat more i came to the 
> > > insight it was probably Qemu and not Xen causing the fault.
> > 
> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
> > 
> > > After testing i can conclude, Jan was right, the bisection was a red herring,
> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
> > > (strange thing is that for as far as i know i did a "make distclean" between 
> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen 
> > > tree and therefor the bisection should have lead to a commit with a Config.mk 
> > > hash change for qemu-xen version.)
> > 
> > > Will see if i can find some more time and bisect qemu and find the culprit.
> > 
> > > --
> > > Sander
> > 
> > 
> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
> > issue with my present git-foo.
> > 
> > The first try with bisection of the whole xen-tree seems to have hit the issue that the 
> > qemu-revision that gets pulled on a fresh build is "master" during the whole
> > dev period. That creates havoc when trying to bisect, since you are testing 
> > combinations that were never developed (nor auto tested) in that combination
> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
> > "xen: fix usage of xc_domain_create in domain builder")
> > 
> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
> > seabios on rel-1.8.2) it get stuck on issues with that tree.
> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
> > been a lot of merges going back and forth and to me it seems a mess (but as i 
> > said it could also be a lack of git-foo). I tried by manual bisecting, removing 
> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
> > seem to be some random problem)
> > 
> 
> Thanks for trying.
> 
> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
> > and put it on the "known broken" list of things.
> > 
> 
> I will put this into the list of things I need to look at.
> 

I investigated this a bit. The root cause is that the memory accounting is
wrong in QEMU: it tries to allocate more RAM than the guest is allowed.
I haven't tried to figure out exactly what is wrong, though.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-25 11:24                     ` Wei Liu
@ 2016-10-25 11:37                       ` Sander Eikelenboom
  2016-10-25 14:49                         ` Wei Liu
  0 siblings, 1 reply; 24+ messages in thread
From: Sander Eikelenboom @ 2016-10-25 11:37 UTC (permalink / raw)
  To: Wei Liu; +Cc: Doug Goldstein, Jan Beulich, xen-devel


Tuesday, October 25, 2016, 1:24:12 PM, you wrote:

> On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
>> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
>> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
>> > 
>> > > Hi Jan / Wei,
>> > 
>> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
>> > > After analyzing the output of xl -vvvvv create somewhat more i came to the 
>> > > insight it was probably Qemu and not Xen causing the fault.
>> > 
>> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
>> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
>> > 
>> > > After testing i can conclude, Jan was right, the bisection was a red herring,
>> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
>> > > (strange thing is that for as far as i know i did a "make distclean" between 
>> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen 
>> > > tree and therefor the bisection should have lead to a commit with a Config.mk 
>> > > hash change for qemu-xen version.)
>> > 
>> > > Will see if i can find some more time and bisect qemu and find the culprit.
>> > 
>> > > --
>> > > Sander
>> > 
>> > 
>> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
>> > issue with my present git-foo.
>> > 
>> > The first try with bisection of the whole xen-tree seems to have hit the issue that the 
>> > qemu-revision that gets pulled on a fresh build is "master" during the whole
>> > dev period. That creates havoc when trying to bisect, since you are testing 
>> > combinations that were never developed (nor auto tested) in that combination
>> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
>> > "xen: fix usage of xc_domain_create in domain builder")
>> > 
>> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
>> > seabios on rel-1.8.2) it get stuck on issues with that tree.
>> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
>> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
>> > been a lot of merges going back and forth and to me it seems a mess (but as i 
>> > said it could also be a lack of git-foo). I tried by manual bisecting, removing 
>> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
>> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
>> > seem to be some random problem)
>> > 
>> 
>> Thanks for trying.
>> 
>> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
>> > and put it on the "known broken" list of things.
>> > 
>> 
>> I will put this into the list of things I need to look at.
>> 

> I investigated this a bit. The root cause is the memory accounting is
> wrong in QEMU. It would try to allocate more ram than allowed. I haven't
> tried to figure out exactly what is wrong, though.

That confirms what I was thinking in the end, but bisecting the qemu-tree
changes between the xen-4.6.0 and xen-4.7.0 releases proved to be pretty
difficult, as I explained. So if you have a hunch as to what code it resides
in, debugging instead of bisecting would probably be better.
(So one of the questions is: what changes in the memory accounting when you
supply the kernel from the host instead of from within the guest, given that
booting a kernel with grub from within the guest doesn't give any memory
accounting issues?)

Thanks for investigating !
--

Sander

> Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-25 11:37                       ` Sander Eikelenboom
@ 2016-10-25 14:49                         ` Wei Liu
  2016-10-25 15:00                           ` Wei Liu
  2016-10-25 17:25                           ` Sander Eikelenboom
  0 siblings, 2 replies; 24+ messages in thread
From: Wei Liu @ 2016-10-25 14:49 UTC (permalink / raw)
  To: Sander Eikelenboom; +Cc: Doug Goldstein, Wei Liu, Jan Beulich, xen-devel

On Tue, Oct 25, 2016 at 01:37:45PM +0200, Sander Eikelenboom wrote:
> 
> Tuesday, October 25, 2016, 1:24:12 PM, you wrote:
> 
> > On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
> >> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
> >> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
> >> > 
> >> > > Hi Jan / Wei,
> >> > 
> >> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
> >> > > After analyzing the output of xl -vvvvv create somewhat more i came to the 
> >> > > insight it was probably Qemu and not Xen causing the fault.
> >> > 
> >> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> >> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
> >> > 
> >> > > After testing i can conclude, Jan was right, the bisection was a red herring,
> >> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
> >> > > (strange thing is that for as far as i know i did a "make distclean" between 
> >> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen 
> >> > > tree and therefor the bisection should have lead to a commit with a Config.mk 
> >> > > hash change for qemu-xen version.)
> >> > 
> >> > > Will see if i can find some more time and bisect qemu and find the culprit.
> >> > 
> >> > > --
> >> > > Sander
> >> > 
> >> > 
> >> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
> >> > issue with my present git-foo.
> >> > 
> >> > The first try with bisection of the whole xen-tree seems to have hit the issue that the 
> >> > qemu-revision that gets pulled on a fresh build is "master" during the whole
> >> > dev period. That creates havoc when trying to bisect, since you are testing 
> >> > combinations that were never developed (nor auto tested) in that combination
> >> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
> >> > "xen: fix usage of xc_domain_create in domain builder")
> >> > 
> >> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
> >> > seabios on rel-1.8.2) it get stuck on issues with that tree.
> >> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
> >> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
> >> > been a lot of merges going back and forth and to me it seems a mess (but as i 
> >> > said it could also be a lack of git-foo). I tried by manual bisecting, removing 
> >> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
> >> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
> >> > seem to be some random problem)
> >> > 
> >> 
> >> Thanks for trying.
> >> 
> >> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
> >> > and put it on the "known broken" list of things.
> >> > 
> >> 
> >> I will put this into the list of things I need to look at.
> >> 
> 
> > I investigated this a bit. The root cause is the memory accounting is
> > wrong in QEMU. It would try to allocate more ram than allowed. I haven't
> > tried to figure out exactly what is wrong, though.
> 
> That confirms what i was thinking in the end, but bisection the qemu-tree 
> changes between the xen-4.6.0 and xen-4.7.0 release proved to be pretty 
> difficult as i explained. So i you have a hunch as to in what code it should 
> reside debugging instead of bisecting would probably be better.
> (so one of the questions is what changes in the memory accounting when you
> supply the kernel from the host instead of the guest, since booting a kernel
> with grub from within the guest doesn't give any memory accounting issues.) 
> 
> Thanks for investigating !

I think I hunted down the offending function.

Mind trying this patch for me?

---8<---
From 3c7f8b55109959cf470deeee452f452f7c0ade51 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Tue, 25 Oct 2016 15:45:04 +0100
Subject: [PATCH] acpi: don't build acpi tables for xen guests

Xen's toolstack is in charge of building ACPI tables. Skip acpi table
building if running on Xen.

This issue was discovered because direct kernel boot on Xen doesn't boot
anymore: the new ACPI tables cause the guest to exceed its memory
allocation limit.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>

RFC because I'm not sure this is the best way to fix it.
---
 hw/i386/acpi-build.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a26a4bb..6ba5031 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -45,6 +45,7 @@
 #include "sysemu/tpm_backend.h"
 #include "hw/timer/mc146818rtc_regs.h"
 #include "sysemu/numa.h"
+#include "hw/xen/xen.h"
 
 /* Supported chipsets: */
 #include "hw/acpi/piix4.h"
@@ -2865,6 +2866,12 @@ void acpi_setup(void)
         return;
     }
 
+    if (xen_enabled()) {
+        fprintf(stderr, "%s %d\n", __FILE__, __LINE__);
+        ACPI_BUILD_DPRINTF("Xen enabled. Bailing out.\n");
+        return;
+    }
+
     build_state = g_malloc0(sizeof *build_state);
 
     acpi_set_pci_info();
-- 
2.1.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-25 14:49                         ` Wei Liu
@ 2016-10-25 15:00                           ` Wei Liu
  2016-10-25 17:25                           ` Sander Eikelenboom
  1 sibling, 0 replies; 24+ messages in thread
From: Wei Liu @ 2016-10-25 15:00 UTC (permalink / raw)
  To: Sander Eikelenboom; +Cc: Doug Goldstein, Wei Liu, Jan Beulich, xen-devel

On Tue, Oct 25, 2016 at 03:49:59PM +0100, Wei Liu wrote:
> On Tue, Oct 25, 2016 at 01:37:45PM +0200, Sander Eikelenboom wrote:
> > 
> > Tuesday, October 25, 2016, 1:24:12 PM, you wrote:
> > 
> > > On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
> > >> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
> > >> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
> > >> > 
> > >> > > Hi Jan / Wei,
> > >> > 
> > >> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
> > >> > > After analyzing the output of xl -vvvvv create somewhat more i came to the 
> > >> > > insight it was probably Qemu and not Xen causing the fault.
> > >> > 
> > >> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> > >> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
> > >> > 
> > >> > > After testing i can conclude, Jan was right, the bisection was a red herring,
> > >> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
> > >> > > (strange thing is that for as far as i know i did a "make distclean" between 
> > >> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen 
> > >> > > tree and therefor the bisection should have lead to a commit with a Config.mk 
> > >> > > hash change for qemu-xen version.)
> > >> > 
> > >> > > Will see if i can find some more time and bisect qemu and find the culprit.
> > >> > 
> > >> > > --
> > >> > > Sander
> > >> > 
> > >> > 
> > >> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this 
> > >> > issue with my present git-foo.
> > >> > 
> > >> > The first try with bisection of the whole xen-tree seems to have hit the issue that the 
> > >> > qemu-revision that gets pulled on a fresh build is "master" during the whole
> > >> > dev period. That creates havoc when trying to bisect, since you are testing 
> > >> > combinations that were never developed (nor auto tested) in that combination
> > >> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's 
> > >> > "xen: fix usage of xc_domain_create in domain builder")
> > >> > 
> > >> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and 
> > >> > seabios on rel-1.8.2) it get stuck on issues with that tree.
> > >> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
> > >> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have 
> > >> > been a lot of merges going back and forth and to me it seems a mess (but as i 
> > >> > said it could also be a lack of git-foo). I tried by manual bisecting, removing 
> > >> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
> > >> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't 
> > >> > seem to be some random problem)
> > >> > 
> > >> 
> > >> Thanks for trying.
> > >> 
> > >> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
> > >> > and put it on the "known broken" list of things.
> > >> > 
> > >> 
> > >> I will put this into the list of things I need to look at.
> > >> 
> > 
> > > I investigated this a bit. The root cause is the memory accounting is
> > > wrong in QEMU. It would try to allocate more ram than allowed. I haven't
> > > tried to figure out exactly what is wrong, though.
> > 
> > That confirms what i was thinking in the end, but bisection the qemu-tree 
> > changes between the xen-4.6.0 and xen-4.7.0 release proved to be pretty 
> > difficult as i explained. So i you have a hunch as to in what code it should 
> > reside debugging instead of bisecting would probably be better.
> > (so one of the questions is what changes in the memory accounting when you
> > supply the kernel from the host instead of the guest, since booting a kernel
> > with grub from within the guest doesn't give any memory accounting issues.) 
> > 
> > Thanks for investigating !
> 
> I think I hunted down the offending function.
> 
> Mind trying this patch for me?
> 
> ---8<---
> From 3c7f8b55109959cf470deeee452f452f7c0ade51 Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Tue, 25 Oct 2016 15:45:04 +0100
> Subject: [PATCH] acpi: don't build acpi tables for xen guests
> 
> Xen's toolstack is in charge of building ACPI tables. Skip acpi table
> building if running on Xen.
> 
> This issue is discovered due to direct kernel boot on Xen doesn't boot
> anymore, because the new ACPI tables cause the guest to exceed its
> memory allocation limit.
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> 
> RFC because I'm not sure this is the best way to fix it.
> ---
>  hw/i386/acpi-build.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index a26a4bb..6ba5031 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -45,6 +45,7 @@
>  #include "sysemu/tpm_backend.h"
>  #include "hw/timer/mc146818rtc_regs.h"
>  #include "sysemu/numa.h"
> +#include "hw/xen/xen.h"
>  
>  /* Supported chipsets: */
>  #include "hw/acpi/piix4.h"
> @@ -2865,6 +2866,12 @@ void acpi_setup(void)
>          return;
>      }
>  
> +    if (xen_enabled()) {
> +        fprintf(stderr, "%s %d\n", __FILE__, __LINE__);

Oops, this is just debug output - but you get the idea.

> +        ACPI_BUILD_DPRINTF("Xen enabled. Bailing out.\n");
> +        return;
> +    }
> +
>      build_state = g_malloc0(sizeof *build_state);
>  
>      acpi_set_pci_info();
> -- 
> 2.1.4
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-25 14:49                         ` Wei Liu
  2016-10-25 15:00                           ` Wei Liu
@ 2016-10-25 17:25                           ` Sander Eikelenboom
  2016-10-25 17:26                             ` Wei Liu
  1 sibling, 1 reply; 24+ messages in thread
From: Sander Eikelenboom @ 2016-10-25 17:25 UTC (permalink / raw)
  To: Wei Liu; +Cc: Doug Goldstein, Jan Beulich, xen-devel

On 2016-10-25 16:49, Wei Liu wrote:
> On Tue, Oct 25, 2016 at 01:37:45PM +0200, Sander Eikelenboom wrote:
>> 
>> Tuesday, October 25, 2016, 1:24:12 PM, you wrote:
>> 
>> > On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
>> >> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
>> >> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
>> >> >
>> >> > > Hi Jan / Wei,
>> >> >
>> >> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
>> >> > > After analyzing the output of xl -vvvvv create somewhat more i came to the
>> >> > > insight it was probably Qemu and not Xen causing the fault.
>> >> >
>> >> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
>> >> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
>> >> >
>> >> > > After testing i can conclude, Jan was right, the bisection was a red herring,
>> >> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
>> >> > > (strange thing is that for as far as i know i did a "make distclean" between
>> >> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen
>> >> > > tree and therefor the bisection should have lead to a commit with a Config.mk
>> >> > > hash change for qemu-xen version.)
>> >> >
>> >> > > Will see if i can find some more time and bisect qemu and find the culprit.
>> >> >
>> >> > > --
>> >> > > Sander
>> >> >
>> >> >
>> >> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this
>> >> > issue with my present git-foo.
>> >> >
>> >> > The first try with bisection of the whole xen-tree seems to have hit the issue that the
>> >> > qemu-revision that gets pulled on a fresh build is "master" during the whole
>> >> > dev period. That creates havoc when trying to bisect, since you are testing
>> >> > combinations that were never developed (nor auto tested) in that combination
>> >> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's
>> >> > "xen: fix usage of xc_domain_create in domain builder")
>> >> >
>> >> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and
>> >> > seabios on rel-1.8.2) it get stuck on issues with that tree.
>> >> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
>> >> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have
>> >> > been a lot of merges going back and forth and to me it seems a mess (but as i
>> >> > said it could also be a lack of git-foo). I tried by manual bisecting, removing
>> >> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
>> >> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't
>> >> > seem to be some random problem)
>> >> >
>> >>
>> >> Thanks for trying.
>> >>
>> >> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
>> >> > and put it on the "known broken" list of things.
>> >> >
>> >>
>> >> I will put this into the list of things I need to look at.
>> >>
>> 
>> > I investigated this a bit. The root cause is the memory accounting is
>> > wrong in QEMU. It would try to allocate more ram than allowed. I haven't
>> > tried to figure out exactly what is wrong, though.
>> 
>> That confirms what i was thinking in the end, but bisection the 
>> qemu-tree
>> changes between the xen-4.6.0 and xen-4.7.0 release proved to be 
>> pretty
>> difficult as i explained. So i you have a hunch as to in what code it 
>> should
>> reside debugging instead of bisecting would probably be better.
>> (so one of the questions is what changes in the memory accounting when 
>> you
>> supply the kernel from the host instead of the guest, since booting a 
>> kernel
>> with grub from within the guest doesn't give any memory accounting 
>> issues.)
>> 
>> Thanks for investigating !
> 
> I think I hunted down the offending function.
> 
> Mind trying this patch for me?

Hi Wei,

This seems to help :)

With a Linux 4.8 kernel the HVM guest now boots fine with direct kernel
boot!

But there seems to be a gotcha which I think is not in the Xen docs/wiki:
when trying a Linux 4.3 kernel the guest still didn't boot, and I got
"qemu: linux kernel too old to load a ram disk" in the qemu log.
I don't know what qemu regards as "old" in this case.

Another consideration: would it be worthwhile to add an OSStest for direct
kernel boot?
(Under the assumption that the host kernel that gets built can also boot an
HVM guest, it's probably a very cheap test not requiring any additional
builds.)

Thanks again !

--
Sander


> ---8<---
> From 3c7f8b55109959cf470deeee452f452f7c0ade51 Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Tue, 25 Oct 2016 15:45:04 +0100
> Subject: [PATCH] acpi: don't build acpi tables for xen guests
> 
> Xen's toolstack is in charge of building ACPI tables. Skip acpi table
> building if running on Xen.
> 
> This issue is discovered due to direct kernel boot on Xen doesn't boot
> anymore, because the new ACPI tables cause the guest to exceed its
> memory allocation limit.
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> 
> RFC because I'm not sure this is the best way to fix it.
> ---
>  hw/i386/acpi-build.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index a26a4bb..6ba5031 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -45,6 +45,7 @@
>  #include "sysemu/tpm_backend.h"
>  #include "hw/timer/mc146818rtc_regs.h"
>  #include "sysemu/numa.h"
> +#include "hw/xen/xen.h"
> 
>  /* Supported chipsets: */
>  #include "hw/acpi/piix4.h"
> @@ -2865,6 +2866,12 @@ void acpi_setup(void)
>          return;
>      }
> 
> +    if (xen_enabled()) {
> +        fprintf(stderr, "%s %d\n", __FILE__, __LINE__);
> +        ACPI_BUILD_DPRINTF("Xen enabled. Bailing out.\n");
> +        return;
> +    }
> +
>      build_state = g_malloc0(sizeof *build_state);
> 
>      acpi_set_pci_info();

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore.
  2016-10-25 17:25                           ` Sander Eikelenboom
@ 2016-10-25 17:26                             ` Wei Liu
  0 siblings, 0 replies; 24+ messages in thread
From: Wei Liu @ 2016-10-25 17:26 UTC (permalink / raw)
  To: Sander Eikelenboom; +Cc: Doug Goldstein, Wei Liu, Jan Beulich, xen-devel

On Tue, Oct 25, 2016 at 07:25:06PM +0200, Sander Eikelenboom wrote:
> On 2016-10-25 16:49, Wei Liu wrote:
> >On Tue, Oct 25, 2016 at 01:37:45PM +0200, Sander Eikelenboom wrote:
> >>
> >>Tuesday, October 25, 2016, 1:24:12 PM, you wrote:
> >>
> >>> On Tue, Oct 18, 2016 at 01:48:23PM +0100, Wei Liu wrote:
> >>>> On Mon, Oct 17, 2016 at 05:28:17PM +0200, Sander Eikelenboom wrote:
> >>>> > Thursday, October 13, 2016, 4:43:31 PM, you wrote:
> >>>> >
> >>>> > > Hi Jan / Wei,
> >>>> >
> >>>> > > Took a while before i had the chance to fiddle some more to find the actual culprit.
> >>>> > > After analyzing the output of xl -vvvvv create somewhat more i came to the
> >>>> > > insight it was probably Qemu and not Xen causing the fault.
> >>>> >
> >>>> > > As a test I just used a qemu-xen binary build with xen-4.6.0 booting up a guest with
> >>>> > > direct kernel boot mode on xen-unstable. And that old qemu binary works fine.
> >>>> >
> >>>> > > After testing i can conclude, Jan was right, the bisection was a red herring,
> >>>> > > the problem is caused by some change in Qemu and not by something in the Xen tree.
> >>>> > > (strange thing is that for as far as i know i did a "make distclean" between
> >>>> > > every build (taking a lot of time), which should have pulled a fresh qemu-xen
> >>>> > > tree and therefor the bisection should have lead to a commit with a Config.mk
> >>>> > > hash change for qemu-xen version.)
> >>>> >
> >>>> > > Will see if i can find some more time and bisect qemu and find the culprit.
> >>>> >
> >>>> > > --
> >>>> > > Sander
> >>>> >
> >>>> >
> >>>> > Unfortunately i have to give up on this issue, for me it's impossible to bisect this
> >>>> > issue with my present git-foo.
> >>>> >
> >>>> > The first try with bisection of the whole xen-tree seems to have hit the issue that the
> >>>> > qemu-revision that gets pulled on a fresh build is "master" during the whole
> >>>> > dev period. That creates havoc when trying to bisect, since you are testing
> >>>> > combinations that were never developed (nor auto tested) in that combination
> >>>> > (especially when a xen-tree and qemu-tree change have a dependency like Roger's
> >>>> > "xen: fix usage of xc_domain_create in domain builder")
> >>>> >
> >>>> > While trying to bisect only qemu (keeping xen itself on RELEASE-4.6.0 and
> >>>> > seabios on rel-1.8.2) it get stuck on issues with that tree.
> >>>> > Between 4.6.0 and 4.7.0 the qemu tree switched from git://xenbits.xen.org/qemu-upstream-4.6-testing.git
> >>>> > to git://xenbits.xen.org/qemu-xen.git),after that there seem to have
> >>>> > been a lot of merges going back and forth and to me it seems a mess (but as i
> >>>> > said it could also be a lack of git-foo). I tried by manual bisecting, removing
> >>>> > and cloning trees again etc. but that doesn't suffice, it's all going no-where.
> >>>> > (while the known good build (plain RELEASE-4.6.0) always works, so it doesn't
> >>>> > seem to be some random problem)
> >>>> >
> >>>>
> >>>> Thanks for trying.
> >>>>
> >>>> > So perhaps some dev can at least verify that the issue is there (since 4.7.0)
> >>>> > and put it on the "known broken" list of things.
> >>>> >
> >>>>
> >>>> I will put this into the list of things I need to look at.
> >>>>
> >>
> >>> I investigated this a bit. The root cause is the memory accounting is
> >>> wrong in QEMU. It would try to allocate more ram than allowed. I haven't
> >>> tried to figure out exactly what is wrong, though.
> >>
> >>That confirms what i was thinking in the end, but bisection the
> >>qemu-tree
> >>changes between the xen-4.6.0 and xen-4.7.0 release proved to be pretty
> >>difficult as i explained. So i you have a hunch as to in what code it
> >>should
> >>reside debugging instead of bisecting would probably be better.
> >>(so one of the questions is what changes in the memory accounting when
> >>you
> >>supply the kernel from the host instead of the guest, since booting a
> >>kernel
> >>with grub from within the guest doesn't give any memory accounting
> >>issues.)
> >>
> >>Thanks for investigating !
> >
> >I think I hunted down the offending function.
> >
> >Mind trying this patch for me?
> 
> Hi Wei,
> 
> This seems to help :)
> 
> With a linux 4.8 kernel the HVM guest now boots fine with direct kernel boot
> !
> 
> But there seems to be a gotcha which i think is not in the Xen docs/wiki:
> when trying a linux 4.3 kernel the guest still didn't boot and i got a:
> "qemu: linux kernel too old to load a ram disk" in the qemu log.
> I don't know what qemu regards as "old" in this case.
> 

QEMU checks for a signature / version in the kernel header or whatnot. I
can't tell why that specific number is chosen, though.
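
If it is the usual x86 boot-protocol check, "old" would mean an image without
the "HdrS" signature at offset 0x202, i.e. a setup protocol older than 2.00,
which is when bzImage/initrd support was added. A small standalone sketch to
inspect a kernel image (based on the boot-protocol documentation, not on
QEMU's actual code) would be something like:

/* sketch: inspect the x86 Linux boot-protocol header of a kernel image.
 * Offsets are from Documentation/x86/boot.txt; this only mirrors the kind
 * of check a loader does before agreeing to load an initrd, it is not
 * QEMU's actual code.
 */
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    uint8_t hdr[0x210];
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <kernel-image>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) {
        fprintf(stderr, "image too small to contain a boot header\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* "HdrS" at offset 0x202 marks boot protocol 2.00+ (initrd capable) */
    if (hdr[0x202] != 'H' || hdr[0x203] != 'd' ||
        hdr[0x204] != 'r' || hdr[0x205] != 'S') {
        printf("no HdrS signature: pre-2.00 image, no initrd support\n");
        return 0;
    }
    /* setup protocol version, little-endian u16 at offset 0x206 */
    printf("boot protocol %u.%02u\n",
           (unsigned)hdr[0x207], (unsigned)hdr[0x206]);
    return 0;
}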

> Another considiration: would it be worthwhile to add an OSStest for direct
> kernel boot ?
> (under the assumption that the host kernel that gets build can also boot on
> HVM guest it's probably a very cheap test not requiring any additional
> builds.)

Yes, definitely. The more tests, the merrier.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel



Thread overview: 24+ messages
2016-08-25 20:21 Regression between Xen 4.6.0 and 4.7.0, Direct kernel boot on a qemu-xen and seabios HVM guest doesn't work anymore linux
2016-08-25 20:34 ` Doug Goldstein
2016-08-25 21:18   ` linux
2016-08-26 10:19     ` Håkon Alstadheim
2016-08-30 12:35       ` Wei Liu
2016-08-30 22:13         ` Håkon Alstadheim
2016-09-05  9:20     ` linux
2016-09-05  9:25       ` Wei Liu
2016-09-05  9:46       ` Jan Beulich
2016-09-05 10:02         ` linux
2016-09-05 10:25           ` Jan Beulich
2016-09-05 11:19             ` linux
2016-09-05 11:43               ` Jan Beulich
2016-09-05 12:00                 ` linux
2016-10-13 14:43               ` Sander Eikelenboom
2016-10-17 15:28                 ` Sander Eikelenboom
2016-10-18 12:48                   ` Wei Liu
2016-10-18 21:32                     ` Håkon Alstadheim
2016-10-25 11:24                     ` Wei Liu
2016-10-25 11:37                       ` Sander Eikelenboom
2016-10-25 14:49                         ` Wei Liu
2016-10-25 15:00                           ` Wei Liu
2016-10-25 17:25                           ` Sander Eikelenboom
2016-10-25 17:26                             ` Wei Liu
