From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "Paul Durrant" <paul@xen.org>,
	Xen-devel <xen-devel@lists.xenproject.org>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"Ian Jackson" <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 9/9] x86/spec-ctrl: Hide RDRAND by default on IvyBridge
Date: Tue, 16 Jun 2020 17:26:49 +0100	[thread overview]
Message-ID: <00db98fd-d268-71ae-fad1-fb59d2f1eba1@citrix.com> (raw)
In-Reply-To: <a0733b2c-da6b-e5bc-3041-de30002bcb47@suse.com>

On 16/06/2020 11:00, Jan Beulich wrote:
> On 15.06.2020 16:15, Andrew Cooper wrote:
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -512,11 +512,21 @@ The Speculation Control hardware features `srbds-ctrl`, `md-clear`, `ibrsb`,
>>  `stibp`, `ibpb`, `l1d-flush` and `ssbd` are used by default if available and
>>  applicable.  They can all be ignored.
>>  
>> -`rdrand` and `rdseed` can be ignored, as a mitigation to XSA-320 /
>> -CVE-2020-0543.  The RDRAND feature is disabled by default on certain AMD
>> -systems, due to possible malfunctions after ACPI S3 suspend/resume.  `rdrand`
>> -may be used in its positive form to override Xen's default behaviour on these
>> -systems, and make the feature fully usable.
>> +`rdrand` and `rdseed` have multiple interactions.
>> +
>> +*   For Special Register Buffer Data Sampling (SRBDS, XSA-320, CVE-2020-0543),
>> +    RDRAND and RDSEED can be ignored.
>> +
>> +    Due to the absence microcode to address SRBDS on IvyBridge hardware, the
> Nit: "... absence of microcode ..."

Fixed.

>
>> +    RDRAND feature is hidden by default for guests, unless `rdrand` is used in
>> +    its positive form.  Irrespective of the default setting here, VMs can use
>> +    RDRAND if explicitly enabled in guest config file, and VMs already using
>> +    RDRAND can migrate in.
> I'm somewhat confused by the use of "default setting" here, when at the same
> time you talk about our default behavior for guests. Aiui the two "default"
> mean different things, so I'd suggest dropping that latter "default".

Ok, done.

>
> This raises a question though: Is disabling RDRAND just for guests good
> enough? I.e. what about Xen's own uses of RDRAND? There may not be any
> problematic ones right now, but wouldn't there be a latent issue no-one is
> going to notice?

I was sorely tempted to delete all of Xen's use of RDRAND, seeing as
it's not even safe in the face of the AMD issue.

What we don't have is a "no-xen" concept for CPUID features, so I can't
stop Xen using it without also hiding it from all guests, which in turn
would defeat the point of this series and reintroduce the migration
problems we're trying to code around.
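
To illustrate what I mean by a "no-xen" concept (purely hypothetical -
no such annotation exists in cpufeatureset.h today), it would be
something along these lines:

    /* Illustrative only: the "no-xen" tag below is made up.  It would
     * mean "guests may see this feature, but Xen itself must not use
     * it"; the existing exposure letters ('A' etc.) stay as today. */
    XEN_CPUFEATURE(RDRAND, 1*32+30) /*A  RDRAND instruction, no-xen */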

I was planning to leave the Xen uses as they are for now.

>
>> --- a/tools/libxc/xc_cpuid_x86.c
>> +++ b/tools/libxc/xc_cpuid_x86.c
>> @@ -503,6 +503,9 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
>>       */
>>      if ( restore )
>>      {
>> +        if ( test_bit(X86_FEATURE_RDRAND, host_featureset) && !p->basic.rdrand )
>> +            p->basic.rdrand = true;
> Same question as before: Why do you derive from the host feature set rather
> than the domain type's maximum one?

The answer is the same as before.

Although I do see now that this should be simplified to:

    p->basic.rdrand = test_bit(X86_FEATURE_RDRAND, host_featureset);

which I've done.
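
i.e. roughly the following for the restore hunk (untested sketch, the
rest of the block unchanged):

    if ( restore )
    {
        /*
         * Reinstate RDRAND for incoming VMs whenever the host has it,
         * so guests already using RDRAND can migrate in.
         */
        p->basic.rdrand = test_bit(X86_FEATURE_RDRAND, host_featureset);

        ...
    }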

>
>> --- a/xen/arch/x86/cpuid.c
>> +++ b/xen/arch/x86/cpuid.c
>> @@ -340,6 +340,25 @@ static void __init calculate_host_policy(void)
>>      }
>>  }
>>  
>> +static void __init guest_common_default_feature_adjustments(uint32_t *fs)
>> +{
>> +    /*
>> +     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
>> +     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
>> +     * compensate.
>> +     *
>> +     * Mitigate by hiding RDRAND from guests by default, unless explicitly
>> +     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
>> +     * default setting, guests can use RDRAND if explicitly enabled
>> +     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
>> +     * previously using RDRAND can migrate in.
>> +     */
>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
>> +         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
> This is the first time (description plus patch so far) that the issue
> gets mentioned as being specific to client parts, and the workaround
> restricted to them.  If so, I think at least the doc should say so too.

I've updated the command line doc, and patch subject.
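
For reference, the two overrides the comment refers to look like this
(xl guest config syntax assumed):

    # Xen command line - keep offering RDRAND to guests by default:
    cpuid=rdrand

    # Per-VM override in the guest's xl config file:
    cpuid = "host,rdrand=1"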

~Andrew


