From: "Jan Beulich"
To: Harmandeep Kaur
Cc: Andrew Cooper, Dario Faggioli, xen-devel@lists.xen.org, Shuai Ruan
Subject: Re: Error booting Xen
Date: Wed, 27 Jan 2016 06:12:14 -0700
Message-ID: <56A8D03E02000078000CB8B8@prv-mh.provo.novell.com>
List-Id: xen-devel@lists.xenproject.org

>>> On 26.01.16 at 19:02, wrote:
> Last time, I did absolutely nothing. System was idle
> and it crashed just after the login. Now, I booted the
> system again and this time, there is no reset. But,
> performance of the system is very slow. Browser
> (Mozilla Firefox) freezes a lot. Also, before applying
> patches, when I used to disable xsave it resulted in
> the same kind of performance issues. And the following
> is still present in the log.
>
> (XEN) traps.c:3290: GPF (0000): ffff82d0801c1cea -> ffff82d080252e5c
> (XEN) d1v1 fault#1: mxcsr=00001f80
> (XEN) d1v1 xs=0000000000000003 xc=8000000000000000
> (XEN) d1v1 r0=0000000000000000 r1=0000000000000000
> (XEN) d1v1 r2=0000000000000000 r3=0000000000000000
> (XEN) d1v1 r4=0000000000000000 r5=0000000000000000
> (XEN) traps.c:3290: GPF (0000): ffff82d0801c1cea -> ffff82d080252e5c
> (XEN) d1v1 fault#2: mxcsr=00001f80
> (XEN) d1v1 xs=0000000000000000 xc=0000000000000000
> (XEN) d1v1 r0=0000000000000000 r1=0000000000000000
> (XEN) d1v1 r2=0000000000000000 r3=0000000000000000
> (XEN) d1v1 r4=0000000000000000 r5=0000000000000000
>
> Full log here: http://paste2.org/C8WpyKOg

This together with ...

> On Tue, Jan 26, 2016 at 10:53 PM, Jan Beulich wrote:
>>>>> On 26.01.16 at 18:01, wrote:
>>> I tried the 3rd patch together with the earlier two. I'm
>>> afraid the problem is not solved completely.
>>> Full log goes here, http://paste2.org/KEAetMHb

... this, and both apparently being the same build, makes me suspect
that uninitialized data gets passed in from the tool stack. But that's
a secondary issue for now.

For the immediate problem, here are four patches replacing the three
earlier ones (I think only one of them is unchanged, so be sure to
remove the old ones first). Their intended ordering is:

x86-xsaves-init.patch
x86-xstate-align.patch
x86-xrstors-fault.patch
x86-xstate-validate.patch

Jan

=== x86-xrstors-fault.patch ===

x86/xstate: fix fault behavior on XRSTORS

XRSTORS unconditionally faults when xcomp_bv has bit 63 clear.
Instead
of just fixing this issue, overhaul the fault recovery code, which -
one of the many mistakes made when xstate support got introduced - was
blindly mirroring that accompanying FXRSTOR, neglecting the fact that
XRSTOR{,S} aren't all-or-nothing instructions. The new code, first of
all, does all the recovery actions in C, simplifying the inline
assembly used. And it does its work in a multi-stage fashion: Upon
first seeing a fault, state fixups get applied strictly based on what
architecturally may cause #GP. When seeing another fault despite the
fixups done, state gets fully reset. A third fault would then lead to
crashing the domain (instead of hanging the hypervisor in an infinite
loop of recurring faults).

Reported-by: Harmandeep Kaur
Signed-off-by: Jan Beulich

--- unstable.orig/xen/arch/x86/xstate.c	2016-01-25 09:35:12.000000000 +0100
+++ unstable/xen/arch/x86/xstate.c	2016-01-27 10:23:06.000000000 +0100
@@ -29,6 +29,8 @@ unsigned int *__read_mostly xstate_sizes
 static unsigned int __read_mostly xstate_features;
 static unsigned int __read_mostly xstate_comp_offsets[sizeof(xfeature_mask)*8];
 
+static uint32_t __read_mostly mxcsr_mask = MXCSR_DEFAULT;
+
 /* Cached xcr0 for fast read */
 static DEFINE_PER_CPU(uint64_t, xcr0);
 
@@ -342,6 +344,7 @@ void xrstor(struct vcpu *v, uint64_t mas
     uint32_t hmask = mask >> 32;
     uint32_t lmask = mask;
     struct xsave_struct *ptr = v->arch.xsave_area;
+    unsigned int faults, prev_faults;
 
     /*
      * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
@@ -361,35 +364,85 @@ void xrstor(struct vcpu *v, uint64_t mas
     /*
      * XRSTOR can fault if passed a corrupted data block. We handle this
      * possibility, which may occur if the block was passed to us by control
-     * tools or through VCPUOP_initialise, by silently clearing the block.
+     * tools or through VCPUOP_initialise, by silently adjusting state.
      */
-    switch ( __builtin_expect(ptr->fpu_sse.x[FPU_WORD_SIZE_OFFSET], 8) )
+    for ( prev_faults = faults = 0; ; prev_faults = faults )
     {
+        switch ( __builtin_expect(ptr->fpu_sse.x[FPU_WORD_SIZE_OFFSET], 8) )
+        {
 #define XRSTOR(pfx) \
         alternative_io("1: .byte " pfx "0x0f,0xae,0x2f\n" \
+                       "3:\n" \
                        " .section .fixup,\"ax\"\n" \
-                       "2: mov %[size],%%ecx\n" \
-                       "   xor %[lmask_out],%[lmask_out]\n" \
-                       "   rep stosb\n" \
-                       "   lea %[mem],%[ptr]\n" \
-                       "   mov %[lmask_in],%[lmask_out]\n" \
-                       "   jmp 1b\n" \
+                       "2: inc%z[faults] %[faults]\n" \
+                       "   jmp 3b\n" \
                        " .previous\n" \
                        _ASM_EXTABLE(1b, 2b), \
                        ".byte " pfx "0x0f,0xc7,0x1f\n", \
                        X86_FEATURE_XSAVES, \
-                       ASM_OUTPUT2([ptr] "+&D" (ptr), [lmask_out] "+&a" (lmask)), \
-                       [mem] "m" (*ptr), [lmask_in] "g" (lmask), \
-                       [hmask] "d" (hmask), [size] "m" (xsave_cntxt_size) \
-                       : "ecx")
-
-    default:
-        XRSTOR("0x48,");
-        break;
-    case 4: case 2:
-        XRSTOR("");
-        break;
+                       ASM_OUTPUT2([mem] "+m" (*ptr), [faults] "+g" (faults)), \
+                       [lmask] "a" (lmask), [hmask] "d" (hmask), \
+                       [ptr] "D" (ptr))
+
+        default:
+            XRSTOR("0x48,");
+            break;
+        case 4: case 2:
+            XRSTOR("");
+            break;
 #undef XRSTOR
+        }
+        if ( likely(faults == prev_faults) )
+            break;
+#ifndef NDEBUG
+        gprintk(XENLOG_WARNING, "fault#%u: mxcsr=%08x\n",
+                faults, ptr->fpu_sse.mxcsr);
+        gprintk(XENLOG_WARNING, "xs=%016lx xc=%016lx\n",
+                ptr->xsave_hdr.xstate_bv, ptr->xsave_hdr.xcomp_bv);
+        gprintk(XENLOG_WARNING, "r0=%016lx r1=%016lx\n",
+                ptr->xsave_hdr.reserved[0], ptr->xsave_hdr.reserved[1]);
+        gprintk(XENLOG_WARNING, "r2=%016lx r3=%016lx\n",
+                ptr->xsave_hdr.reserved[2], ptr->xsave_hdr.reserved[3]);
+        gprintk(XENLOG_WARNING, "r4=%016lx r5=%016lx\n",
+                ptr->xsave_hdr.reserved[4], ptr->xsave_hdr.reserved[5]);
+#endif
+        switch ( faults )
+        {
+        case 1:
+            /* Stage 1: Reset state to be loaded. */
+            ptr->xsave_hdr.xstate_bv &= ~mask;
+            /*
+             * Also try to eliminate fault reasons, even if this shouldn't be
+             * needed here (other code should ensure the sanity of the data).
+             */
+            if ( ((mask & XSTATE_SSE) ||
+                  ((mask & XSTATE_YMM) &&
+                   !(ptr->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED))) )
+                ptr->fpu_sse.mxcsr &= mxcsr_mask;
+            if ( cpu_has_xsaves || cpu_has_xsavec )
+            {
+                ptr->xsave_hdr.xcomp_bv &= this_cpu(xcr0) | this_cpu(xss);
+                ptr->xsave_hdr.xstate_bv &= ptr->xsave_hdr.xcomp_bv;
+                ptr->xsave_hdr.xcomp_bv |= XSTATE_COMPACTION_ENABLED;
+            }
+            else
+            {
+                ptr->xsave_hdr.xstate_bv &= this_cpu(xcr0);
+                ptr->xsave_hdr.xcomp_bv = 0;
+            }
+            memset(ptr->xsave_hdr.reserved, 0, sizeof(ptr->xsave_hdr.reserved));
+            continue;
+        case 2:
+            /* Stage 2: Reset all state. */
+            ptr->fpu_sse.mxcsr = MXCSR_DEFAULT;
+            ptr->xsave_hdr.xstate_bv = 0;
+            ptr->xsave_hdr.xcomp_bv = cpu_has_xsaves
+                                      ? XSTATE_COMPACTION_ENABLED : 0;
+            continue;
+        default:
+            domain_crash(current->domain);
+            break;
+        }
     }
 }
 
@@ -496,6 +549,8 @@ void xstate_init(struct cpuinfo_x86 *c)
 
     if ( bsp )
     {
+        static typeof(current->arch.xsave_area->fpu_sse) __initdata ctxt;
+
         xfeature_mask = feature_mask;
         /*
          * xsave_cntxt_size is the max size required by enabled features.
@@ -504,6 +559,10 @@ void xstate_init(struct cpuinfo_x86 *c)
         xsave_cntxt_size = _xstate_ctxt_size(feature_mask);
         printk("%s: using cntxt_size: %#x and states: %#"PRIx64"\n",
                __func__, xsave_cntxt_size, xfeature_mask);
+
+        asm ( "fxsave %0" : "=m" (ctxt) );
+        if ( ctxt.mxcsr_mask )
+            mxcsr_mask = ctxt.mxcsr_mask;
     }
     else
     {

=== x86-xsaves-init.patch ===

x86/xstate: fix xcomp_bv initialization

We must not clear the compaction bit when using XSAVES/XRSTORS. And
we need to guarantee that xcomp_bv never has any bits clear which
are set in xstate_bv (which requires partly undoing commit 83ae0bb226
["x86/xsave: simplify xcomp_bv initialization"]). Split initialization
of xcomp_bv from the other FPU/SSE/AVX related state setup in
arch_set_info_guest() and hvm_load_cpu_ctxt().

Reported-by: Harmandeep Kaur
Signed-off-by: Jan Beulich

--- unstable.orig/xen/arch/x86/domain.c	2016-01-27 09:29:50.000000000 +0100
+++ unstable/xen/arch/x86/domain.c	2016-01-27 09:52:37.000000000 +0100
@@ -922,15 +922,10 @@ int arch_set_info_guest(
     {
         memcpy(v->arch.fpu_ctxt, &c.nat->fpu_ctxt, sizeof(c.nat->fpu_ctxt));
         if ( v->arch.xsave_area )
-        {
             v->arch.xsave_area->xsave_hdr.xstate_bv = XSTATE_FP_SSE;
-            v->arch.xsave_area->xsave_hdr.xcomp_bv =
-                cpu_has_xsaves ? XSTATE_COMPACTION_ENABLED : 0;
-        }
     }
     else if ( v->arch.xsave_area )
-        memset(&v->arch.xsave_area->xsave_hdr, 0,
-               sizeof(v->arch.xsave_area->xsave_hdr));
+        v->arch.xsave_area->xsave_hdr.xstate_bv = 0;
     else
     {
         typeof(v->arch.xsave_area->fpu_sse) *fpu_sse = v->arch.fpu_ctxt;
@@ -939,6 +934,14 @@ int arch_set_info_guest(
         fpu_sse->fcw = FCW_DEFAULT;
         fpu_sse->mxcsr = MXCSR_DEFAULT;
     }
+    if ( cpu_has_xsaves )
+    {
+        ASSERT(v->arch.xsave_area);
+        v->arch.xsave_area->xsave_hdr.xcomp_bv = XSTATE_COMPACTION_ENABLED |
+            v->arch.xsave_area->xsave_hdr.xstate_bv;
+    }
+    else if ( v->arch.xsave_area )
+        v->arch.xsave_area->xsave_hdr.xcomp_bv = 0;
 
     if ( !compat )
     {
--- unstable.orig/xen/arch/x86/hvm/hvm.c	2015-12-18 12:22:20.000000000 +0100
+++ unstable/xen/arch/x86/hvm/hvm.c	2016-01-27 09:52:26.000000000 +0100
@@ -2094,11 +2094,17 @@ static int hvm_load_cpu_ctxt(struct doma
 
         memcpy(v->arch.xsave_area, ctxt.fpu_regs, sizeof(ctxt.fpu_regs));
         xsave_area->xsave_hdr.xstate_bv = XSTATE_FP_SSE;
-        xsave_area->xsave_hdr.xcomp_bv =
-            cpu_has_xsaves ? XSTATE_COMPACTION_ENABLED : 0;
     }
     else
         memcpy(v->arch.fpu_ctxt, ctxt.fpu_regs, sizeof(ctxt.fpu_regs));
+    if ( cpu_has_xsaves )
+    {
+        ASSERT(v->arch.xsave_area);
+        v->arch.xsave_area->xsave_hdr.xcomp_bv = XSTATE_COMPACTION_ENABLED |
+            v->arch.xsave_area->xsave_hdr.xstate_bv;
+    }
+    else if ( v->arch.xsave_area )
+        v->arch.xsave_area->xsave_hdr.xcomp_bv = 0;
 
     v->arch.user_regs.eax = ctxt.rax;
     v->arch.user_regs.ebx = ctxt.rbx;
@@ -5488,8 +5494,8 @@ void hvm_vcpu_reset_state(struct vcpu *v
     if ( v->arch.xsave_area )
     {
         v->arch.xsave_area->xsave_hdr.xstate_bv = XSTATE_FP;
-        v->arch.xsave_area->xsave_hdr.xcomp_bv =
-            cpu_has_xsaves ? XSTATE_COMPACTION_ENABLED : 0;
+        v->arch.xsave_area->xsave_hdr.xcomp_bv = cpu_has_xsaves
+            ? XSTATE_COMPACTION_ENABLED | XSTATE_FP : 0;
     }
 
     v->arch.vgc_flags = VGCF_online;

=== x86-xstate-align.patch ===

x86: adjust xsave structure attributes

The packed attribute was pointlessly used here - there are no
misaligned fields, and hence even if the attribute took effect, it
would at best lead to the compiler generating worse code.

At the same time specify the required alignment of the fpu_sse sub-
structure, such that the various typeof() uses on that field obtain
pointers to properly aligned memory (knowledge which a compiler may
want to make use of).

Also add suitable build-time checks.

Signed-off-by: Jan Beulich

--- unstable.orig/xen/arch/x86/i387.c	2016-01-25 11:30:11.000000000 +0100
+++ unstable/xen/arch/x86/i387.c	2016-01-25 09:35:36.000000000 +0100
@@ -277,7 +277,9 @@ int vcpu_init_fpu(struct vcpu *v)
     }
     else
     {
-        v->arch.fpu_ctxt = _xzalloc(sizeof(v->arch.xsave_area->fpu_sse), 16);
+        BUILD_BUG_ON(__alignof(v->arch.xsave_area->fpu_sse) < 16);
+        v->arch.fpu_ctxt = _xzalloc(sizeof(v->arch.xsave_area->fpu_sse),
+                                    __alignof(v->arch.xsave_area->fpu_sse));
         if ( v->arch.fpu_ctxt )
         {
             typeof(v->arch.xsave_area->fpu_sse) *fpu_sse = v->arch.fpu_ctxt;
--- unstable.orig/xen/arch/x86/xstate.c	2016-01-25 11:30:11.000000000 +0100
+++ unstable/xen/arch/x86/xstate.c	2016-01-25 09:35:12.000000000 +0100
@@ -414,7 +414,8 @@ int xstate_alloc_save_area(struct vcpu *
     BUG_ON(xsave_cntxt_size < XSTATE_AREA_MIN_SIZE);
 
     /* XSAVE/XRSTOR requires the save area be 64-byte-boundary aligned. */
-    save_area = _xzalloc(xsave_cntxt_size, 64);
+    BUILD_BUG_ON(__alignof(*save_area) < 64);
+    save_area = _xzalloc(xsave_cntxt_size, __alignof(*save_area));
     if ( save_area == NULL )
         return -ENOMEM;
 
--- unstable.orig/xen/include/asm-x86/xstate.h	2016-01-25 11:30:11.000000000 +0100
+++ unstable/xen/include/asm-x86/xstate.h	2016-01-25 11:33:20.000000000 +0100
@@ -48,9 +48,9 @@ extern u64 xfeature_mask;
 extern unsigned int *xstate_sizes;
 
 /* extended state save area */
-struct __packed __attribute__((aligned (64))) xsave_struct
+struct __attribute__((aligned (64))) xsave_struct
 {
-    union {                                  /* FPU/MMX, SSE */
+    union __attribute__((aligned(16))) {     /* FPU/MMX, SSE */
         char x[512];
         struct {
             uint16_t fcw;

=== x86-xstate-validate.patch ===

x86/xstate: extend validation to cover full header

Since we never hand out compacted state, at least for now we're also
not going to accept such.

Reported-by: Harmandeep Kaur
Signed-off-by: Jan Beulich

--- unstable.orig/xen/arch/x86/domctl.c	2016-01-27 10:54:16.000000000 +0100
+++ unstable/xen/arch/x86/domctl.c	2016-01-27 10:44:52.000000000 +0100
@@ -958,7 +958,7 @@ long arch_do_domctl(
             {
                 if ( evc->size >= 2 * sizeof(uint64_t) + XSTATE_AREA_MIN_SIZE )
                     ret = validate_xstate(_xcr0, _xcr0_accum,
-                                          _xsave_area->xsave_hdr.xstate_bv);
+                                          &_xsave_area->xsave_hdr);
             }
             else if ( !_xcr0 )
                 ret = 0;
--- unstable.orig/xen/arch/x86/hvm/hvm.c	2016-01-27 09:52:26.000000000 +0100
+++ unstable/xen/arch/x86/hvm/hvm.c	2016-01-27 11:09:44.000000000 +0100
@@ -2178,6 +2178,19 @@ static int hvm_save_cpu_xsave_states(str
     return 0;
 }
 
+/*
+ * Structure layout conformity checks, documenting correctness of the cast in
+ * the invocation of validate_xstate() below.
+ * Leverage CONFIG_COMPAT machinery to perform this.
+ */
+#define xen_xsave_hdr xsave_hdr
+#define compat_xsave_hdr hvm_hw_cpu_xsave_hdr
+CHECK_FIELD_(struct, xsave_hdr, xstate_bv);
+CHECK_FIELD_(struct, xsave_hdr, xcomp_bv);
+CHECK_FIELD_(struct, xsave_hdr, reserved);
+#undef compat_xsave_hdr
+#undef xen_xsave_hdr
+
 static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid, size;
@@ -2233,7 +2246,7 @@ static int hvm_load_cpu_xsave_states(str
     h->cur += desc->length;
 
     err = validate_xstate(ctxt->xcr0, ctxt->xcr0_accum,
-                          ctxt->save_area.xsave_hdr.xstate_bv);
+                          (const void *)&ctxt->save_area.xsave_hdr);
     if ( err )
     {
         printk(XENLOG_G_WARNING
--- unstable.orig/xen/arch/x86/xstate.c	2016-01-27 10:23:06.000000000 +0100
+++ unstable/xen/arch/x86/xstate.c	2016-01-27 10:48:22.000000000 +0100
@@ -614,17 +614,24 @@ static bool_t valid_xcr0(u64 xcr0)
     return !(xcr0 & XSTATE_BNDREGS) == !(xcr0 & XSTATE_BNDCSR);
 }
 
-int validate_xstate(u64 xcr0, u64 xcr0_accum, u64 xstate_bv)
+int validate_xstate(u64 xcr0, u64 xcr0_accum, const struct xsave_hdr *hdr)
 {
-    if ( (xstate_bv & ~xcr0_accum) ||
+    unsigned int i;
+
+    if ( (hdr->xstate_bv & ~xcr0_accum) ||
          (xcr0 & ~xcr0_accum) ||
          !valid_xcr0(xcr0) ||
          !valid_xcr0(xcr0_accum) )
         return -EINVAL;
 
-    if ( xcr0_accum & ~xfeature_mask )
+    if ( (xcr0_accum & ~xfeature_mask) ||
+         hdr->xcomp_bv )
         return -EOPNOTSUPP;
 
+    for ( i = 0; i < ARRAY_SIZE(hdr->reserved); ++i )
+        if ( hdr->reserved[i] )
+            return -EIO;
+
     return 0;
 }
 
--- unstable.orig/xen/include/asm-x86/xstate.h	2016-01-25 11:33:20.000000000 +0100
+++ unstable/xen/include/asm-x86/xstate.h	2016-01-27 10:57:54.000000000 +0100
@@ -72,14 +72,13 @@ struct __attribute__((aligned (64))) xsa
         };
     } fpu_sse;
 
-    struct {
+    struct xsave_hdr {
         u64 xstate_bv;
         u64 xcomp_bv;
         u64 reserved[6];
     } xsave_hdr;                             /* The 64-byte header */
 
-    struct { char x[XSTATE_YMM_SIZE]; } ymm; /* YMM */
-    char data[];                             /* Future new states */
+    char data[];                             /* Variable layout states */
 };
 
 /* extended state operations */
@@ -90,7 +89,8 @@ uint64_t get_msr_xss(void);
 void xsave(struct vcpu *v, uint64_t mask);
 void xrstor(struct vcpu *v, uint64_t mask);
 bool_t xsave_enabled(const struct vcpu *v);
-int __must_check validate_xstate(u64 xcr0, u64 xcr0_accum, u64 xstate_bv);
+int __must_check validate_xstate(u64 xcr0, u64 xcr0_accum,
+                                 const struct xsave_hdr *);
 int __must_check handle_xsetbv(u32 index, u64 new_bv);
 void expand_xsave_states(struct vcpu *v, void *dest, unsigned int size);
 void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size);
--- unstable.orig/xen/include/public/arch-x86/hvm/save.h	2016-01-13 07:56:27.000000000 +0100
+++ unstable/xen/include/public/arch-x86/hvm/save.h	2016-01-27 11:09:20.000000000 +0100
@@ -550,12 +550,11 @@ struct hvm_hw_cpu_xsave {
     struct {
         struct { char x[512]; } fpu_sse;
 
-        struct {
+        struct hvm_hw_cpu_xsave_hdr {
             uint64_t xstate_bv;         /* Updated by XRSTOR */
-            uint64_t reserved[7];
+            uint64_t xcomp_bv;          /* Updated by XRSTOR{C,S} */
+            uint64_t reserved[6];
         } xsave_hdr;                    /* The 64-byte header */
-
-        struct { char x[0]; } ymm;      /* YMM */
     } save_area;
 };

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel