From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	"Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] x86/boot: Clean up the trampoline transition into Long mode
Date: Fri, 3 Jan 2020 14:25:43 +0000	[thread overview]
Message-ID: <1628e07b-4599-e885-df5b-8b71872ca291@citrix.com> (raw)
In-Reply-To: <5f6fff0a-d677-11e5-07ad-7e0250d29477@suse.com>

On 03/01/2020 13:52, Jan Beulich wrote:
> On 03.01.2020 14:44, Andrew Cooper wrote:
>> On 03/01/2020 13:36, Jan Beulich wrote:
>>> On 02.01.2020 15:59, Andrew Cooper wrote:
>>>> @@ -111,26 +109,6 @@ trampoline_protmode_entry:
>>>>  start64:
>>>>          /* Jump to high mappings. */
>>>>          movabs  $__high_start, %rdi
>>>> -
>>>> -#ifdef CONFIG_INDIRECT_THUNK
>>>> -        /*
>>>> -         * If booting virtualised, or hot-onlining a CPU, sibling threads can
>>>> -         * attempt Branch Target Injection against this jmp.
>>>> -         *
>>>> -         * We've got no usable stack so can't use a RETPOLINE thunk, and are
>>>> -         * further than disp32 from the high mappings so couldn't use
>>>> -         * JUMP_THUNK even if it was a non-RETPOLINE thunk.  Furthermore, an
>>>> -         * LFENCE isn't necessarily safe to use at this point.
>>>> -         *
>>>> -         * As this isn't a hotpath, use a fully serialising event to reduce
>>>> -         * the speculation window as much as possible.  %ebx needs preserving
>>>> -         * for __high_start.
>>>> -         */
>>>> -        mov     %ebx, %esi
>>>> -        cpuid
>>>> -        mov     %esi, %ebx
>>>> -#endif
>>>> -
>>>>          jmpq    *%rdi
>>> I can see this being unneeded when running virtualized, as you said
>>> in reply to Wei. However, for hot-onlining (when other CPUs may run
>>> random vCPU-s) I don't see how this can safely be dropped. There's
>>> no similar concern for S3 resume, as thaw_domains() happens only
>>> after enable_nonboot_cpus().
>> I covered that in the same reply.  Any guest which can use branch target
>> injection against this jmp can also poison the regular branch predictor
>> and get at data that way.
> Aren't you implying then that retpolines could also be dropped?

No.  It is a simple risk vs complexity tradeoff.

Guests running on a sibling *can already* attack this branch with BTI,
because CPUID isn't a fix to bad BTB speculation, and the leakage gadget
need only be a single instruction.

Such a guest can also attack Xen in general with Spectre v1.

As I said - this was introduced out of paranoia, back when the few
people who knew about the issues (only several hundred at the time) were
still working out what exactly a speculative attack looked like, and
were applying duct tape to everything suspicious because we had zero
time to rewrite several core pieces of system handling.

>> Once again, we get to CPU Hotplug being an unused feature in practice,
>> which is completely evident now with Intel MCE behaviour.
> What does Intel's MCE behavior have to do with whether CPU hotplug
> (or hot-onlining) is (un)used in practice?

The logical consequence of hotplug breaking MCEs.

If hotplug had been in real use, the MCE behaviour would have come to
light much sooner, because MCEs would not have worked in practice.

Given that MCEs really did work in practice, even before the L1TF days,
hotplug cannot have been in common-enough use for anyone to notice the
MCE behaviour.

>> A guest can't control/guess when a hotplug event might occur, or where
>> exactly this branch is in memory (after all - it is variable based on
>> the position of the trampoline), and core scheduling mitigates the risk
>> entirely.
> "... will mitigate ..." - it's experimental up to now, isn't it?

Core scheduling ought to prevent the problem entirely.  The current code
is not safe in the absence of core scheduling.

~Andrew

