From: George Dunlap <george.dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: further post-Meltdown-bad-aid performance thoughts
Date: Fri, 19 Jan 2018 15:43:26 +0000	[thread overview]
Message-ID: <6134e80b-8fd0-8a7d-61c8-df5892685c63@citrix.com> (raw)
In-Reply-To: <5A6210B502000078001A06A5@prv-mh.provo.novell.com>

On 01/19/2018 02:37 PM, Jan Beulich wrote:
> All,
> 
> along the lines of the relatively easy first step submitted yesterday,
> I've had some further thoughts in that direction. A fundamental
> thing for this is of course to first of all establish what kind of
> information we consider safe to expose (in the long run) to guests.
> 
> The current state of things is deemed incomplete, yet despite my
> earlier inquiries I haven't heard back any concrete example of
> information, exposure of which does any harm. While it seems to be
> generally believed that large parts of the Xen image should not be
> exposed, it's not all that clear to me why that would be. I could
> agree with better hiding writable data parts of it, just to be on the
> safe side (I'm unaware of statically allocated data though which
> might carry any secrets), but what would be the point of hiding
> code and r/o data? Anyone wanting to know their contents can
> simply obtain the Xen binary for their platform.

This tails into a discussion I think we should have about dealing with
SP1, and also future-proofing against future speculative execution attacks.

Right now there are "windows" through which people can look using SP1-3,
which we are trying to close.  SP3's "window" is the guest -> hypervisor
virtual address space (hence XPTI, separating the address spaces).
SP2's "window" is branch-target-poisoned gadgets (hence retpoline and
various other techniques to prevent branch target poisoning).  SP1's
"window" is speculation past array bounds checks, hence Linux's attempts
to prevent speculation over such checks by using lfence or other
tricks[1].

But there will surely be more attacks like this (in fact, there may
already be some in the works[2]).

So what if, instead of trying to close the "windows", we made it so that
there was nothing through the windows to see?  If, no matter what the
hypervisor speculatively executed, nothing sensitive was visible except
what a vcpu was already allowed to see, then these attacks would gain
the attacker nothing.

At a first cut, there are two kinds of data inside the hypervisor which
might be interesting to an attacker:

1) Guest data: private encryption keys, secret data, &c
 1a. Direct copies of guest data
 1b. Data from which an attacker can infer guest data

2) Hypervisor data that makes it easier to perform other exploits.  For
instance, the layout of memory, the exact address of certain dynamic
data structures, &c.

Personally I don't think we should worry too much about #2.  The main
thing we should be focusing on is 1a and 1b.

Another potential consideration is information about what monitoring
tools might be deployed against the attacker; an attacker might act
differently if she knew that VMI was being used than otherwise.  But I
doubt that the presence of VMI can really be kept secret very well; if
I had a choice between obfuscating VMI and recovering the performance
lost to SP* mitigations, I think I'd go for performance.

> The reason I bring this up is because further steps in the direction
> of recovering performance would likely require as a prerequisite
> exposure of further data, first and foremost struct vcpu and
> struct domain for the currently active vCPU. Once again I'm not
> aware of any secrets living there. Another item might need to be
> the local CPU's per-CPU data.

A quick glance through struct vcpu doesn't turn up anything obvious.  If
we were worried about RowHammer, though, the MFNs stored in various
fields might be worth hiding.

Maybe it would be better to start "whitelisting" state that was believed
to be safe, rather than blacklisting state known to be dangerous.

On the whole I agree with Jan's approach, to start exposing, for
performance reasons, bits of state we believe to be safe, and then deal
with attacks as they come up.

 -George

[1] https://lwn.net/SubscriberLink/744287/02dd9bc503409ca3/
[2] skyfallattack.com



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 11+ messages
2018-01-19 14:37 further post-Meltdown-bad-aid performance thoughts Jan Beulich
2018-01-19 15:43 ` George Dunlap [this message]
2018-01-19 16:36   ` Jan Beulich
2018-01-19 17:00     ` George Dunlap
2018-01-22  9:25       ` Jan Beulich
2018-01-22 12:33         ` George Dunlap
2018-01-22 13:30           ` Jan Beulich
2018-01-22 15:15             ` George Dunlap
2018-01-22 17:04               ` Jan Beulich
2018-01-22 17:11                 ` George Dunlap
2018-01-22 17:44   ` Matt Wilson
