From: George Dunlap <george.dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: further post-Meltdown-bad-aid performance thoughts
Date: Mon, 22 Jan 2018 12:33:41 +0000
Message-ID: <b1bde0ac-3c33-9337-1280-4add8545f75a@citrix.com>
In-Reply-To: <5A65BC0102000078001A0ED5@prv-mh.provo.novell.com>

On 01/22/2018 09:25 AM, Jan Beulich wrote:
>>>> On 19.01.18 at 18:00, <george.dunlap@citrix.com> wrote:
>> On 01/19/2018 04:36 PM, Jan Beulich wrote:
>>>>>> On 19.01.18 at 16:43, <george.dunlap@citrix.com> wrote:
>>>> So what if instead of trying to close the "windows", we made it so that
>>>> there was nothing through the windows to see?  If no matter what the
>>>> hypervisor speculatively executed, nothing sensitive was visible except
>>>> what a vcpu was already allowed to see,
>>>
>>> I think you didn't finish your sentence here, but I also think I
>>> can guess the missing part. There's a price to pay for such an
>>> approach though - iterating over domains, or vCPU-s of a
>>> domain (just as an example) wouldn't be simple list walks
>>> anymore. There are certainly other things. IOW - yes, an
>>> approach like this seems possible, but with all the lost
>>> performance I think we shouldn't go overboard with further
>>> hiding.
>>
>> Right, so the next question: what information *from other guests* are
>> sensitive?
>>
>> Obviously the guest registers are sensitive.  But how much of the
>> information in the vcpu struct that we actually need to have "to hand"
>> is sensitive information that we need to hide from other VMs?
> 
> None, I think. But that's not the main aspect here. struct vcpu
> instances come and go, which would mean we'd have to
> permanently update what is or is not being exposed in the page
> tables used. This, while solvable, is going to be a significant
> burden in terms of synchronizing page tables (if we continue to
> use per-CPU ones) and/or TLB shootdown. Whereas if only the
> running vCPU's structure (and its struct domain) are exposed,
> no such synchronization is needed (things would simply be
> updated during context switch).

I'm not sure we're actually communicating.

Correct me if I'm wrong: at the moment, under XPTI, hypercalls running
in Xen still have access to all of host memory.  To protect against
SP3, we remove almost all Xen memory from the address space before
switching to the guest.
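
Roughly, as I understand it (a conceptual sketch only, not the real
XPTI code; the names below are made up):

/* Per physical CPU, XPTI-style: two root page tables -- a full one
 * used while running in Xen, and a restricted one that maps little
 * more than the entry/exit trampolines and the per-CPU stacks. */

static inline void write_cr3(unsigned long cr3)
{
    asm volatile ( "mov %0, %%cr3" :: "r" (cr3) : "memory" );
}

static void return_to_pv_guest(unsigned long restricted_cr3)
{
    /* Drop (almost) all Xen mappings before running the guest, so an
     * SP3/Meltdown read from guest context finds nothing of Xen's. */
    write_cr3(restricted_cr3);
    /* ...sysret/iret to the guest... */
}

static void enter_hypervisor(unsigned long full_cr3)
{
    /* On hypercall or interrupt entry, switch back to the full page
     * tables; from here on, Xen can see all of host memory again. */
    write_cr3(full_cr3);
}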

What I'm proposing is something like this:

* We have a "global" region of Xen memory that is mapped by all
processors.  This will contain everything we consider not sensitive,
including Xen text segments and most domain and vcpu data.  But it will
*not* map all of host memory, nor will it contain sensitive data such as
vcpu register state.

* We have per-cpu "local" regions.  In this region we will map, on
demand, guest memory which is needed to perform the current operation.
(We can consider how strictly we need to unmap memory after using it.)
We will also map the current vcpu's registers.  (A rough sketch of the
bookkeeping follows the list below.)

* On entry to a 64-bit PV guest, we don't change the mapping at all.
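
To make the "local" region bullet a bit more concrete, here is a very
rough, simplified sketch of the bookkeeping (all names are made up; in
real code local_map()/local_unmap() would write and clear an L1 entry
in this CPU's private page tables, needing only a local invlpg and
never a cross-CPU TLB shootdown):

#include <stdint.h>
#include <stddef.h>

#define LOCAL_SLOTS 64            /* size of this CPU's private window */

/* One of these per physical CPU; it is visible only through that CPU's
 * page tables, so updating it never requires synchronization. */
struct local_region {
    uint64_t slot_mfn[LOCAL_SLOTS];   /* 0 = slot unused */
};

/* Map a guest frame into this CPU's private window; returns a slot
 * index, or -1 if the window is full. */
static int local_map(struct local_region *lr, uint64_t mfn)
{
    for ( size_t i = 0; i < LOCAL_SLOTS; i++ )
        if ( lr->slot_mfn[i] == 0 )
        {
            lr->slot_mfn[i] = mfn;
            return (int)i;
        }
    return -1;
}

/* Unmap again once the current operation is done. */
static void local_unmap(struct local_region *lr, int slot)
{
    lr->slot_mfn[slot] = 0;
}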

Now, no matter what the speculative attack -- SP1, SP2, or SP3 -- a vcpu
can only access its own RAM and registers.  There's no extra overhead to
context switching into or out of the hypervisor.

Given that, I don't understand what the following comments mean:

"There's a price to pay for such an approach though - iterating over
domains, or vCPU-s of a domain (just as an example) wouldn't be simple
list walks anymore."

If we remove sensitive information from the domain and vcpu structs,
then any bit of hypervisor code can iterate over domain and vcpu structs
at will; only if it actually needs to read or write sensitive data will
it have to perform an expensive map/unmap operation.  But in general,
to read another vcpu's registers you already need to do a vcpu_pause() /
vcpu_unpause(), which involves at least two IPIs (with one
spin-and-wait), so it doesn't seem like the map/unmap should add a lot
of extra overhead.
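
Concretely, something like the following, where local_map_vcpu_regs() /
local_unmap_vcpu_regs() are hypothetical helpers for the scheme above,
while vcpu_pause()/vcpu_unpause() are what we already do today:

/* Sketch only: reading another vcpu's registers under the proposal.
 * The pause/unpause (IPIs plus spin-and-wait) already dominates the
 * cost; mapping and unmapping the register page is the only new step. */
static void read_remote_vcpu_regs(struct vcpu *v, struct cpu_user_regs *out)
{
    vcpu_pause(v);                        /* existing, expensive */

    /* Hypothetical: map v's register area into this CPU's local region. */
    struct cpu_user_regs *regs = local_map_vcpu_regs(v);

    *out = *regs;

    local_unmap_vcpu_regs(regs);          /* hypothetical */
    vcpu_unpause(v);                      /* existing */
}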

"struct vcpu instances come and go, which would mean we'd have to
permanently update what is or is not being exposed in the page tables
used. This, while solvable, is going to be a significant burden in terms
of synchronizing page tables (if we continue to use per-CPU ones) and/or
TLB shootdown."

I don't understand what this is referring to in my proposed plan above.

 -George

Thread overview: 11+ messages
2018-01-19 14:37 further post-Meltdown-bad-aid performance thoughts Jan Beulich
2018-01-19 15:43 ` George Dunlap
2018-01-19 16:36   ` Jan Beulich
2018-01-19 17:00     ` George Dunlap
2018-01-22  9:25       ` Jan Beulich
2018-01-22 12:33         ` George Dunlap [this message]
2018-01-22 13:30           ` Jan Beulich
2018-01-22 15:15             ` George Dunlap
2018-01-22 17:04               ` Jan Beulich
2018-01-22 17:11                 ` George Dunlap
2018-01-22 17:44   ` Matt Wilson
