xen-devel.lists.xenproject.org archive mirror
* Re: Do we need to do anything about "dead µops?"
       [not found] <CALCETrXRvhqw0fibE6qom3sDJ+nOa_aEJQeuAjPofh=8h1Cujg@mail.gmail.com>
@ 2021-05-01 17:35 ` Andrew Cooper
From: Andrew Cooper @ 2021-05-01 17:35 UTC (permalink / raw)
  To: Andy Lutomirski, X86 ML, LKML, David Kaplan, David Woodhouse,
	Josh Poimboeuf, Kees Cook, Jann Horn, xen-devel

On 01/05/2021 17:26, Andy Lutomirski wrote:
> Hi all-
>
> The "I See Dead µops" paper that is all over the Internet right now is
> interesting, and I think we should discuss the extent to which we
> should do anything about it.  I think there are two separate issues:
>
> First, should we (try to) flush the µop cache across privilege
> boundaries?  I suspect we could find ways to do this, but I don't
> really see the point.  A sufficiently capable attacker (i.e. one who
> can execute their own code in the dangerous speculative window or one
> who can find a capable enough string of gadgets) can put secrets into
> the TLB, various cache levels, etc.  The µop cache is a nice piece of
> analysis, but I don't think it's qualitatively different from anything
> else that we don't flush.  Am I wrong?
>
> Second, the authors claim that their attack works across LFENCE.  I
> think that this is what's happening:
>
> load secret into %rax
> lfence
> call/jmp *%rax
>
> As I understand it, on CPUs from all major vendors, the call/jmp will
> gladly fetch before lfence retires, but the address from which it
> fetches will come from the branch predictors, not from the
> speculatively computed value in %rax.

The vendor-provided literature on pipelines (primarily, the optimisation
guides) places the register file down by the execution units, not near
the frontend.  Letting the frontend have access to the register value is
distinctly non-trivial.
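
This fetch-from-the-predictor behaviour is exactly what retpolines
exploit in the other direction: since the frontend will speculate to
whatever the predictor supplies rather than to %rax, the mitigation
arranges for the predicted target to be a harmless spin loop.  A rough
sketch of the well-known thunk shape replacing "jmp *%rax" (label names
illustrative, not from this thread):

        call    .Lset_target        # pushes address of .Lcapture_spec
.Lcapture_spec:
        pause                       # speculation from the RSB lands here
        lfence                      # and spins harmlessly until resteer
        jmp     .Lcapture_spec
.Lset_target:
        mov     %rax, (%rsp)        # overwrite return address with real target
        ret                         # architectural target comes from the stack

The point being: the predictor's guess (.Lcapture_spec, via the return
stack) is under the defender's control, while the real target is only
consumed once %rax is architecturally resolved.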

> So this is nothing new.  If the
> kernel leaks a secret into the branch predictors, we have already
> mostly lost, although we have a degree of protection from the various
> flushes we do.  In other words, if we do:
>
> load secret into %rax
> <-- non-speculative control flow actually gets here
> lfence
> call/jmp *%rax
>
> then we will train our secret into the predictors but also leak it
> into icache, TLB, etc, and we already lose.  We shouldn't be doing
> this in a way that matters.  But, if we do:
>
> mispredicted branch
> load secret into %rax
> <-- this won't retire because the branch was mispredicted
> lfence
> <-- here we're fetching but not dispatching
> call/jmp *%rax
>
> then the leak does not actually occur unless we already did the
> problematic scenario above.
>
> So my tentative analysis is that no action on Linux's part is required.
>
> What do you all think?

Everything here seems to boil down to managing to encode the secret in
the branch predictor state, then managing to recover it via the uop
cache side channel.

It is well known and generally understood that once your secret is in
the branch predictor, YHAL (you have already lost).  Code with that
property was broken before this paper, is still broken after this paper,
and needs fixing irrespective.

Viewed in these terms, I don't see what security improvement is gained
from trying to flush the uop cache.

I tentatively agree with your conclusion, that no specific action
concerning the uop cache is required.

~Andrew


