From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, kevin.tian@intel.com,
jun.nakajima@intel.com, JBeulich@suse.com,
andrew.cooper3@citrix.com
Cc: wim.coekaerts@oracle.com
Subject: Nested virtualization off VMware vSphere 6.0 with EL6 guests crashes on Xen 4.6
Date: Mon, 11 Jan 2016 22:38:45 -0500 [thread overview]
Message-ID: <20160112033844.GB15551@char.us.oracle.com> (raw)
Hey,
The machine is an X5-2 which is a Haswell based E5-2699 v3.
We are trying to use nested virtualization. The
guest is a plain VMware vSphere 6.0 install with 32GB and 8 CPUs.
The guest that is then launched within VMware is a 2-VCPU, 2GB Linux
guest (OEL6 to be exact). During its bootup Xen crashes with this assert.
Oddly enough, if this is repeated on an Ivy Bridge workstation CPU (i5-3570),
it works fine.
Disabling APICv (apicv=0) on the Xen command line did not help.
I added some debug code to see whether vapic_pg is bad and what
the p2mt type is [read below].
Serial console started. To stop, type ESC (
(XEN) Assertion 'vapic_pg && !p2m_is_paging(p2mt)' failed at vvmx.c:698
(XEN) ----[ Xen-4.6.0 x86_64 debug=y Tainted: C ]----
(XEN) CPU: 39
(XEN) RIP: e008:[<ffff82d0801ed053>] virtual_vmentry+0x487/0xac9
(XEN) RFLAGS: 0000000000010246 CONTEXT: hypervisor (d1v3)
(XEN) rax: 0000000000000000 rbx: ffff83007786c000 rcx: 0000000000000000
(XEN) rdx: 0000000000000e00 rsi: 000fffffffffffff rdi: ffff83407f81e010
(XEN) rbp: ffff834008a47ea8 rsp: ffff834008a47e38 r8: 0000000000000000
(XEN) r9: 0000000000000000 r10: 0000000000000000 r11: 0000000000000000
(XEN) r12: 0000000000000000 r13: ffff82c000341000 r14: ffff834008a47f18
(XEN) r15: ffff83407f7c4000 cr0: 0000000080050033 cr4: 00000000001526e0
(XEN) cr3: 000000407fb22000 cr2: 0000000000000000
(XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: 0000 cs: e008
(XEN) Xen stack trace from rsp=ffff834008a47e38:
(XEN) ffff834008a47e68 ffff82d0801d2cde ffff834008a47e68 0000000000000d00
(XEN) 0000000000000000 0000000000000000 ffff834008a47e88 00000004801cc30e
(XEN) ffff83007786c000 ffff83007786c000 ffff834008a40000 0000000000000000
(XEN) ffff834008a47f18 0000000000000000 ffff834008a47f08 ffff82d0801edf94
(XEN) ffff834008a47ef8 0000000000000000 ffff834008f62000 ffff834008a47f18
(XEN) 000000ae8c99eb8d ffff83007786c000 0000000000000000 0000000000000000
(XEN) 0000000000000000 0000000000000000 0000000000000000 ffff82d0801ee2ab
(XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 00000000078bfbff 0000000000000000 0000000000000000 0000beef0000beef
(XEN) fffffffffc4b3440 000000bf0000beef 0000000000040046 fffffffffc607f00
(XEN) 000000000000beef 000000000000beef 000000000000beef 000000000000beef
(XEN) 000000000000beef 0000000000000027 ffff83007786c000 0000006f88716300
(XEN) 0000000000000000
(XEN) Xen call trace:
(XEN) [<ffff82d0801ed053>] virtual_vmentry+0x487/0xac9
(XEN) [<ffff82d0801edf94>] nvmx_switch_guest+0x8ff/0x915
(XEN) [<ffff82d0801ee2ab>] vmx_asm_vmexit_handler+0x4b/0xc0
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 39:
(XEN) Assertion 'vapic_pg && !p2m_is_paging(p2mt)' failed at vvmx.c:698
(XEN) ****************************************
(XEN)
...and then, to my surprise, the hypervisor stopped hitting this assert. Instead
I started getting an even more bizarre crash:
(d1) enter handle_19:
(d1) NULL
(d1) Booting from Hard Disk...
(d1) Booting from 0000:7c00
(XEN) stdvga.c:151:d1v0 leaving stdvga mode
(XEN) stdvga.c:147:d1v0 entering stdvga and caching modes
(XEN) stdvga.c:520:d1v0 leaving caching mode
(XEN) ----[ Xen-4.6.0 x86_64 debug=y Tainted: C ]----
(XEN) CPU: 3
(XEN) RIP: e008:[<ffff82d0801e3dc7>] vmx_cpu_up+0xacc/0xba5
(XEN) RFLAGS: 0000000000010242 CONTEXT: hypervisor (d1v1)
(XEN) rax: 0000000000000000 rbx: ffff830077877000 rcx: ffff834077e54000
(XEN) rdx: ffff834007dc8000 rsi: 0000000000002000 rdi: ffff830077877000
(XEN) rbp: ffff834007dcfc48 rsp: ffff834007dcfc38 r8: 0000000004040000
(XEN) r9: 000ffffffffff000 r10: 0000000000000000 r11: fffffffffc423f1e
(XEN) r12: 0000000000002000 r13: 0000000000000000 r14: 0000000000000000
(XEN) r15: 0000000000000000 cr0: 0000000080050033 cr4: 00000000001526e0
(XEN) cr3: 0000004000763000 cr2: 0000000000000000
(XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: 0000 cs: e008
(XEN) Xen stack trace from rsp=ffff834007dcfc38:
(XEN) ffff834007dcfc98 0000000000000000 ffff834007dcfc68 ffff82d0801e2533
(XEN) ffff830077877000 0000000000002000 ffff834007dcfc78 ffff82d0801ea933
(XEN) ffff834007dcfca8 ffff82d0801eaae4 0000000000000000 ffff830077877000
(XEN) 0000000000000000 ffff834007dcff18 ffff834007dcfd08 ffff82d0801eb983
(XEN) ffff834000000001 000000013692c000 ffff834000000000 fffffffffc607f28
(XEN) 0000000000000008 ffff834000000006 ffff834007dcff18 ffff830077877000
(XEN) 0000000000000015 0000000000000000 ffff834007dcff08 ffff82d0801e8c8d
(XEN) ffff834007763000 ffff8300778c2000 ffff8340007c3000 ffff834007dcfd50
(XEN) ffff82d0801e120b ffff834007dcfd50 ffff830077877000 ffff834007dcfdf0
(XEN) 0000000000000000 0000000000000000 ffff82d08012fe0b ffff834007dfcac0
(XEN) ffff834007dd30e8 0000000000000086 ffff834007dcfda0 ffff82d08012d4c2
(XEN) ffff834000000003 0000000000000008 0000000000000000 0000000000000000
(XEN) 0000000000000000 ffff834007dcfdf0 ffff8300778c2000 ffff830077877000
(XEN) ffff834007dd30c8 00000083aa72fdd8 0000000000000001 ffff834007dcfe90
(XEN) 0000000000000286 ffff834007dcfe18 ffff82d08012d4c2 ffff830077877000
(XEN) ffff834007dcfe88 ffff82d0801d67b2 92e004e300000002 ffff830077877560
(XEN) ffff834007dcfe68 ffff82d0801d2cbe ffff834007dcfe68 ffff830077877000
(XEN) ffff8340007c3000 0000439115b27100 ffff834007dcfe88 ffff82d0801cc2ee
(XEN) ffff830077877000 0000000000000100 ffff834007dcff08 ffff82d0801dfd2a
(XEN) ffff834007dcff18 ffff830077877000 ffff834007dcff08 ffff82d0801e6f09
(XEN) Xen call trace:
(XEN) [<ffff82d0801e3dc7>] vmx_cpu_up+0xacc/0xba5
(XEN) [<ffff82d0801e2533>] virtual_vmcs_vmread+0x1c/0x3f
(XEN) [<ffff82d0801ea933>] get_vvmcs_real+0x9/0xb
(XEN) [<ffff82d0801eaae4>] _map_io_bitmap+0x5a/0x9f
(XEN) [<ffff82d0801eb983>] nvmx_handle_vmptrld+0xd5/0x201
(XEN) [<ffff82d0801e8c8d>] vmx_vmexit_handler+0x1253/0x19d4
(XEN) [<ffff82d0801ee261>] vmx_asm_vmexit_handler+0x41/0xc0
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 3:
(XEN) FATAL TRAP: vector = 6 (invalid opcode)
(XEN) ****************************************
(XEN)
(XEN) Manual reset required ('noreboot' specified)
Following the stack with gdb, I see:
(gdb) x/20i virtual_vmcs_vmread
0xffff82d0801e2517 <virtual_vmcs_vmread>: push %rbp
0xffff82d0801e2518 <virtual_vmcs_vmread+1>: mov %rsp,%rbp
0xffff82d0801e251b <virtual_vmcs_vmread+4>: sub $0x10,%rsp
0xffff82d0801e251f <virtual_vmcs_vmread+8>: mov %rbx,(%rsp)
0xffff82d0801e2523 <virtual_vmcs_vmread+12>: mov %r12,0x8(%rsp)
0xffff82d0801e2528 <virtual_vmcs_vmread+17>: mov %rdi,%rbx
0xffff82d0801e252b <virtual_vmcs_vmread+20>: mov %esi,%r12d
0xffff82d0801e252e <virtual_vmcs_vmread+23>: callq 0xffff82d0801e03f9 <virtual_vmcs_enter>
0xffff82d0801e2533 <virtual_vmcs_vmread+28>: mov %r12d,%r12d
0xffff82d0801e2536 <virtual_vmcs_vmread+31>: vmread %r12,%r12
0xffff82d0801e253a <virtual_vmcs_vmread+35>: jbe 0xffff82d0801e3df3
0xffff82d0801e2540 <virtual_vmcs_vmread+41>: mov %rbx,%rdi
0xffff82d0801e2543 <virtual_vmcs_vmread+44>: callq 0xffff82d0801e23f2 <virtual_vmcs_exit>
0xffff82d0801e2548 <virtual_vmcs_vmread+49>: mov %r12,%rax
0xffff82d0801e254b <virtual_vmcs_vmread+52>: mov (%rsp),%rbx
0xffff82d0801e254f <virtual_vmcs_vmread+56>: mov 0x8(%rsp),%r12
0xffff82d0801e2554 <virtual_vmcs_vmread+61>: leaveq
0xffff82d0801e2555 <virtual_vmcs_vmread+62>: retq
0xffff82d0801e2556 <vmx_create_vmcs>: push %rbp
0xffff82d0801e2557 <vmx_create_vmcs+1>: mov %rsp,%rbp
(gdb)
(gdb) x/20i 0xffff82d0801e03f9
0xffff82d0801e03f9 <virtual_vmcs_enter>: push %rbp
0xffff82d0801e03fa <virtual_vmcs_enter+1>: mov %rsp,%rbp
0xffff82d0801e03fd <virtual_vmcs_enter+4>: sub $0x10,%rsp
0xffff82d0801e0401 <virtual_vmcs_enter+8>: mov 0x5c8(%rdi),%rax
0xffff82d0801e0408 <virtual_vmcs_enter+15>: mov %rax,-0x8(%rbp)
0xffff82d0801e040c <virtual_vmcs_enter+19>: vmptrld -0x8(%rbp)
0xffff82d0801e0410 <virtual_vmcs_enter+23>: jbe 0xffff82d0801e3dc7
0xffff82d0801e0416 <virtual_vmcs_enter+29>: leaveq
0xffff82d0801e0417 <virtual_vmcs_enter+30>: retq
(gdb) x/20i 0xffff82d0801e3dc7
0xffff82d0801e3dc7: ud2a
0xffff82d0801e3dc9: ud2a
So the faulting RIP (reported as vmx_cpu_up+0xacc, i.e. 0xffff82d0801e3dc7) is the
out-of-line ud2 that the vmptrld in virtual_vmcs_enter branches to on failure:
static inline void __vmptrld(u64 addr)
{
asm volatile (
#ifdef HAVE_GAS_VMX
"vmptrld %0\n"
#else
VMPTRLD_OPCODE MODRM_EAX_06
#endif
/* CF==1 or ZF==1 --> crash (ud2) */
UNLIKELY_START(be, vmptrld)
"\tud2\n"
UNLIKELY_END_SECTION
:
#ifdef HAVE_GAS_VMX
: "m" (addr)
#else
: "a" (&addr)
#endif
: "memory");
}
Thoughts?
The guest config is quite simple:
hap=1
nestedhvm=1
cpuid = ['0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']
disk = [ 'file:/home/xen/esx2/esx2.img,hda,w','phy:/dev/mapper/vg_caex01db04-esx2,hdb,w']
memory=32000
vcpus=8
name="esx2"
vif = [ 'type=ioemu,bridge=virbr0,model=vmxnet3','type=ioemu,bridge=intbr0,model=vmxnet3' ]
builder = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
vnc=1
vncunused=1
vnclisten="10.68.50.68"
apic=1
acpi=1
pae=1
serial = "pty" # enable serial console
on_reboot = 'restart'
on_crash = 'restart'
The cpuid is borrowed from:
http://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen
Thread overview: 14+ messages
2016-01-12 3:38 Konrad Rzeszutek Wilk [this message]
2016-01-12 9:22 ` Nested virtualization off VMware vSphere 6.0 with EL6 guests crashes on Xen 4.6 Jan Beulich
2016-01-15 21:39 ` Konrad Rzeszutek Wilk
2016-01-18 9:41 ` Jan Beulich
2016-02-02 22:05 ` Konrad Rzeszutek Wilk
2016-02-03 9:34 ` Jan Beulich
2016-02-03 15:07 ` Konrad Rzeszutek Wilk
2016-02-04 18:36 ` Konrad Rzeszutek Wilk
2016-02-05 10:33 ` Jan Beulich
2016-11-03 1:41 ` Konrad Rzeszutek Wilk
2016-11-03 14:36 ` Konrad Rzeszutek Wilk
2016-02-04 5:52 ` Tian, Kevin
2016-02-17 2:54 ` Tian, Kevin
2016-01-12 14:18 ` Alvin Starr