xen-devel.lists.xenproject.org archive mirror
From: "Jan Beulich" <JBeulich@suse.com>
To: Shuai Ruan <shuai.ruan@linux.intel.com>
Cc: andrew.cooper3@citrix.com, keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [V4] x86/xsaves: fix overwriting between non-lazy/lazy xsaves
Date: Fri, 11 Mar 2016 03:16:05 -0700	[thread overview]
Message-ID: <56E2A8F502000078000DB869@prv-mh.provo.novell.com> (raw)
In-Reply-To: <20160311064516.GA11162@shuai.ruan@linux.intel.com>

>>> On 11.03.16 at 07:45, <shuai.ruan@linux.intel.com> wrote:
> On Thu, Mar 10, 2016 at 02:30:34AM -0700, Jan Beulich wrote:
>> > --- a/xen/arch/x86/xstate.c
>> > +++ b/xen/arch/x86/xstate.c
>> > @@ -165,7 +165,7 @@ void expand_xsave_states(struct vcpu *v, void *dest, unsigned int size)
>> >      u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
>> >      u64 valid;
>> >  
>> > -    if ( !cpu_has_xsaves && !cpu_has_xsavec )
>> > +    if ( !(xsave->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) )
>> >      {
>> >          memcpy(dest, xsave, size);
>> >          return;
>> 
>> This one looks correct, but ...
>> 
>> > @@ -206,7 +206,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size)
>> >      u64 xstate_bv = ((const struct xsave_struct *)src)->xsave_hdr.xstate_bv;
>> >      u64 valid;
>> >  
>> > -    if ( !cpu_has_xsaves && !cpu_has_xsavec )
>> > +    if ( !(xsave->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) )
>> >      {
>> >          memcpy(xsave, src, size);
>> >          return;
>> 
>> ... how can this one be? You are in the process of compressing
>> known uncompressed input.
> I think this one is correct: here this check means whether we use
> xsaves in Xen or not (when we use xsaves in Xen,
> xsave->xsave_hdr.xcomp_bv has XSTATE_COMPACTION_ENABLED set).
> To make this clearer, I can add
> if ( !(xsave->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) &&
>      !xsave_area_compressed(src) )

Hmm, my reply indeed was slightly wrong, as you're not looking at
the incoming guest data here. But looking at the format the data
is currently in doesn't seem very reasonable either (even if maybe
it is correct). Hence, together with this ...

> But I do think !xsave_area_compressed(src) is useless.
> There is an "ASSERT(!xsave_area_compressed(src))" following the "if ()".

... I think you should use !xsave_area_compressed(src) and delete
the ASSERT().
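
For illustration, here is a minimal, self-contained sketch of what such a
format check amounts to. The struct layout and helper below are simplified
stand-ins, not Xen's actual definitions; only the XCOMP_BV bit-63 test
reflects the architectural definition of the compacted format:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's types -- illustration only. */
struct xsave_hdr { uint64_t xstate_bv; uint64_t xcomp_bv; };
struct xsave_struct { struct xsave_hdr xsave_hdr; };

/* Bit 63 of XCOMP_BV flags the compacted format (per the Intel SDM). */
#define XSTATE_COMPACTION_ENABLED (1ULL << 63)

static bool xsave_area_compressed(const struct xsave_struct *area)
{
    return (area->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) != 0;
}
```

With a predicate like this, the memcpy fast path asks about the data's
format directly instead of inferring it from CPU feature flags.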

>> > @@ -370,7 +368,7 @@ void xrstor(struct vcpu *v, uint64_t mask)
>> >                         "   .previous\n" \
>> >                         _ASM_EXTABLE(1b, 2b), \
>> >                         ".byte " pfx "0x0f,0xc7,0x1f\n", \
>> > -                       X86_FEATURE_XSAVES, \
>> > +                       X86_FEATURE_XSAVE_COMPACT, \
>> >                         ASM_OUTPUT2([mem] "+m" (*ptr), [faults] "+g" (faults)), \
>> >                         [lmask] "a" (lmask), [hmask] "d" (hmask), \
>> >                         [ptr] "D" (ptr))
>> 
>> I don't think you can stick to alternative patching here - whether
>> to use XRSTORS now depends on what state is to be restored.
>> 
> X86_FEATURE_XSAVE_COMPACT is confusing. I will change
> X86_FEATURE_XSAVE_COMPACT -> X86_FEATURE_USE_XSAVES
> Then, XRSTORS in the alternative patch can depend on 
> X86_FEATURE_USE_XSAVES. 

I don't think this is what we want. In no case is this what I have
been asking for (which also applies to the remainder of your reply).
Just to re-iterate: code outside of the core xsave() / xrstor()
functions should not be concerned at all with what specific save and
restore instructions are being used. All it needs to care about is
to know what layout the data is in, and whether compaction or
expansion is needed while transferring state from / to a guest.

The fact that we introduce a synthetic feature here is solely to
satisfy the alternative instruction patching mechanism (and it
could be dropped if both the save and restore paths came to
use further conditionals, which may well be desirable - I think I
had suggested this for one of the two paths already). And
perhaps it was a mistake to scatter around the setting of
XSTATE_COMPACTION_ENABLED.

May I ask that you take a little step back and think about what
our needs here really are? For this please consider that we want
to save/restore state with as little overhead as possible (i.e. it
may be warranted to make the choice of instruction depend on
the set of components that need saving/restoring, rather than
just the availability of certain instructions). And that choice of
instruction(s) should be as transparent to the rest of the
hypervisor as possible. Which for example means ...

>> Or maybe (to amend the first comment above)
>> "using_xsave_compact" is actually the wrong term now, and this
>> really needs to become "using_xsaves" (in which case the change
>> suggested in that first comment wouldn't be needed anymore). In
> The term using_xsave_compact is confusing (actually here using_xsave_compact
> means using_xsaves). Will change using_xsave_compact -> using_xsaves.

... that "using_xsaves" is not what the rest of the hypervisor
needs to know/check. All that other code at most needs to
know/check is whether the state is / needs to be in compacted form.
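
In code terms, the separation of concerns argued for above could be
sketched like this (helper names are hypothetical; Xen's real restore path
uses inline assembly and alternative patching, which this deliberately
abstracts away): callers ask only about the data's format, and the
instruction choice stays private to the save/restore path.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's types -- illustration only. */
struct xsave_hdr { uint64_t xstate_bv; uint64_t xcomp_bv; };
struct xsave_struct { struct xsave_hdr xsave_hdr; };

#define XSTATE_COMPACTION_ENABLED (1ULL << 63)

/* The only question the rest of the hypervisor needs answered. */
static bool xsave_area_compressed(const struct xsave_struct *area)
{
    return (area->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) != 0;
}

/* Instruction selection stays internal: pick the restore instruction from
 * the format of the data to be restored, not from a globally patched CPU
 * feature. Returning the mnemonic here is purely for illustration. */
static const char *restore_insn_for(const struct xsave_struct *area)
{
    return xsave_area_compressed(area) ? "xrstors" : "xrstor";
}
```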

Jan

>> the end, at least the code outside of xstate.c should be in a state
>> where xstate.c's choice of whether to use XSAVEC doesn't matter
> XSAVEC? 
> Oh, I now realise that simply dropping the xsavec support code is
> too much of a step backwards (what you want here is to use a synthetic CPU
> feature X86_FEATURE_USE_XSAVEC and using_xsavec to disable xsavec like
> xsaves; code depending on cpu_has_xsavec would depend on using_xsavec).
> The code should be ok even if we use xsavec in Xen.
> Is that what you mean?
>> (and ideally this would also extend to all code in that file except
>> for the relevant parts of xsave()).
> If I understand you correctly (i.e. my comments above are right), I think we
> can also add xsavec alternative patching depending on X86_FEATURE_USE_XSAVEC
> in xsave(), and just keep X86_FEATURE_USE_XSAVEC clear in x86_cap.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
