* [RFC 0/4] Adding Virtual Memory Fuses to Xen
@ 2022-12-13 19:48 Smith, Jackson
From: Smith, Jackson @ 2022-12-13 19:48 UTC (permalink / raw)
  To: Smith, Jackson
  Cc: Brookes, Scott, Xen-devel, Stefano Stabellini, Julien Grall,
	bertrand.marquis, jbeulich, Andrew Cooper, Roger Pau Monné,
	George Dunlap, demi, Daniel P. Smith, christopher.w.clark

Hi Xen Developers,

My team at Riverside Research is currently spending IRAD funding
to prototype next-generation secure hypervisor design ideas
on Xen. In particular, we are prototyping the idea of Virtual
Memory Fuses for Software Enclaves, as described in this paper:
https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note that
the paper describes the idea in terms of an OS protecting processes,
while we have implemented it for a hypervisor protecting VMs.

Our goal is to emulate something akin to Intel SGX or AMD SEV,
but using only existing virtual memory features common in all
processors. The basic idea is to leave guest memory unmapped in
the hypervisor, so that a compromised hypervisor cannot read or
write the guest. This idea has been proposed before,
however, Virtual Memory Fuses go one step further; they delete the
hypervisor's mappings to its own page tables, essentially locking
the virtual memory configuration for the lifetime of the system. This
creates what we call "Software Enclaves", ensuring that an adversary
with arbitrary code execution in the hypervisor STILL cannot read/write
guest memory.

With this technique, we protect the integrity and confidentiality of
guest memory. However, a compromised hypervisor can still read/write
register state during traps, or refuse to schedule a guest, denying
service. We also recognize that because this technique precludes
modifying Xen's page tables after startup, it may not be compatible
with all of Xen's potential use cases. On the other hand, there are
some use cases (in particular statically defined embedded systems)
where our technique could be adopted with minimal friction.

With this in mind, our goal is to work with the Xen community to
upstream this work as an optional feature. At this point, we have
a prototype implementation of VMF on Xen (the contents of this RFC
patch series) that supports dom0less guests on arm64. By sharing
our prototype, we hope to socialize our idea, gauge interest, and
hopefully gain useful feedback as we work toward upstreaming.

** IMPLEMENTATION **
In our current setup we have a static configuration with dom0 and
one or two domUs. Soon after boot, dom0 issues a hypercall through
the xenctrl interface to blow the fuse for the domU. In the future,
we could also add code to blow the fuse automatically at startup,
before any domains are unpaused.

Our Xen/arm64 prototype creates Software Enclaves in two steps,
represented by these two functions defined in xen/vmf.h:
void vmf_unmap_guest(struct domain *d);
void vmf_lock_xen_pgtables(void);

In the first step, Xen removes its mappings to the guest(s). On
arm64, Xen keeps a reference to all of guest memory in the directmap.
Right now, we simply walk all of the guest's second stage tables and
remove the mapped frames from the directmap, although there is
probably a more elegant method for this.
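
As a rough sketch of the shape of this step (not the prototype's
actual code: it iterates the domain's page list rather than the
stage-2 tables, and ignores superpage shattering and TLB
maintenance), assuming Xen's existing page_list_for_each() and
destroy_xen_mappings() helpers:

#include <xen/mm.h>
#include <xen/sched.h>

void vmf_unmap_guest(struct domain *d)
{
    struct page_info *page;

    spin_lock(&d->page_alloc_lock);
    page_list_for_each ( page, &d->page_list )
    {
        unsigned long va = (unsigned long)page_to_virt(page);

        /* Drop this frame's directmap alias so Xen can no longer
         * reach the guest page through its linear map. */
        destroy_xen_mappings(va, va + PAGE_SIZE);
    }
    spin_unlock(&d->page_alloc_lock);
}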

Second, Xen removes the mappings to its own page tables.
On arm64, this also involves manipulating the directmap. One challenge
here is that as we start to unmap our tables from the directmap,
we can't use the directmap to walk them. Our solution here is also a
bit less elegant: we temporarily insert a recursive mapping and use
that to remove the page table entries.
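
To make the trick concrete, here is a minimal sketch of the address
arithmetic behind a recursive mapping, assuming a 4-level, 4K-granule
layout; REC_SLOT and the shift values are illustrative, not taken
from the prototype.

/* Hypothetical spare slot in the 512-entry root table. */
#define REC_SLOT 510UL

/*
 * With root[REC_SLOT] pointing back at the root table itself, a walk
 * of the address below resolves root[REC_SLOT] to the root, descends
 * through the tables selected by i0 and i1, and the final lookup at
 * i2 lands on the entry that points at the level-3 table. The level-3
 * table page therefore appears at a known virtual address, writable
 * even after its directmap alias is gone.
 */
static unsigned long recursive_table_va(unsigned long i0,
                                        unsigned long i1,
                                        unsigned long i2)
{
    return (REC_SLOT << 39) | (i0 << 30) | (i1 << 21) | (i2 << 12);
}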

** LIMITATIONS and other closing thoughts **
The current Xen code has obviously been implemented under the
assumption that new pages can be mapped, and that guest virtual
addresses can be read, so this technique will break some Xen
features. However, in the general case (in particular for static
workloads where the number of guests does not change after boot)
we've seen that Xen rarely needs to access guest memory or adjust
its page tables.

We see a lot of potential synergy with other Xen initiatives like
Hyperlaunch for static domain allocation, or SEV support driving new
hypercall interfaces that don't require reading guest memory. These
features would allow VMF (Virtual Memory Fuses) to work with more
configurations and architectures than our current prototype, which
only supports static configurations on arm64.

We have not yet studied how the prototype VMF implementation impacts
performance. On the surface, there should be no significant changes.
However, cache effects from splitting the directmap superpages could
introduce a performance cost.
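
For context, the superpage splitting itself (patch 3/4) boils down to
replacing one block entry with a freshly allocated table of 512
identical smaller entries. The sketch below is illustrative only:
alloc_table_page() is a hypothetical allocator, and the required
break-before-make and TLB maintenance steps are omitted.

/* Shatter a 2M block entry into 512 4K entries so that a single
 * frame can later be removed from the directmap. */
static int split_block_entry(lpae_t *entry)
{
    lpae_t *table = alloc_table_page();  /* hypothetical allocator */
    unsigned int i;

    if ( !table )
        return -ENOMEM;

    for ( i = 0; i < 512; i++ )
    {
        table[i] = *entry;                  /* inherit attributes */
        table[i].pt.base = entry->pt.base + i;
        table[i].pt.table = 1;  /* level-3 entries set the table bit */
    }

    /* Swap the block entry for a pointer to the new table. */
    entry->pt.base = paddr_to_pfn(virt_to_maddr(table));
    entry->pt.table = 1;

    return 0;
}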

Additionally, there is latency introduced by walking all the
tables to retroactively remove guest memory. This could be optimized
by reworking the Xen code to remove the directmap. We've toyed with
the idea, but haven't attempted it yet.

Finally, our initial testing suggests that Xen never reads guest
memory (in a static, non-dom0-enhanced configuration), but we have
not explored this thoroughly.
We know at least these things work:
	Dom0less virtual serial terminal
	Domain scheduling
We are aware that these things currently depend on accessible guest
memory:
	Some hypercalls take guest pointers as arguments
	Virtualized MMIO on arm needs to decode certain load/store
	instructions

It's likely that other Xen features require guest memory access.
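
The hypercall case is the clearest example. A typical handler begins
by copying its argument block out of guest memory, as in the
illustrative fragment below (example_op is a made-up struct; the
copy_from_guest() pattern itself is ubiquitous in Xen). With the
guest's directmap alias removed, this copy has no mapping to go
through and would fault.

long do_example_op(XEN_GUEST_HANDLE_PARAM(example_op_t) arg)
{
    struct example_op op;

    /* Fetch the argument struct from guest memory. */
    if ( copy_from_guest(&op, arg, 1) )
        return -EFAULT;

    /* ... act on op ... */
    return 0;
}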

Also, there is currently a lot of debug code that isn't needed for
normal operation, but assumes the ability to read guest memory or
walk page tables in an exceptional case. The Xen codebase will need
to be audited for these cases, and proper guards inserted so this
code doesn't page fault.
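
Such a guard could be as simple as the sketch below, where
vmf_locked() is a hypothetical predicate over whatever state tracks
the blown fuse.

/* At the top of a debug/dump path that would otherwise dereference
 * guest memory or walk unmapped page tables: */
if ( vmf_locked(d) )
{
    printk("d%d: memory unmapped by VMF, skipping dump\n",
           d->domain_id);
    return;
}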

Thanks for allowing us to share our work with you. We are really
excited about it, and we look forward to hearing your feedback. We
figure those working with Xen on a day-to-day basis will likely
uncover details we have overlooked.

Jackson


