From: Tobin Feldman-Fitzthum <tobin@linux.ibm.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: ashish.kalra@amd.com, brijesh.singh@amd.com, jejb@linux.ibm.com,
	jon.grimm@amd.com, tobin@ibm.com, qemu-devel@nongnu.org,
	dovmurik@linux.vnet.ibm.com, Dov.Murik1@il.ibm.com,
	pbonzini@redhat.com
Subject: Re: RFC: Fast Migration for SEV and SEV-ES - blueprint and proof of concept
Date: Fri, 30 Oct 2020 17:10:02 -0400
Message-ID: <5bec748d1e6ec171ef0d226c361edde5@linux.vnet.ibm.com>
In-Reply-To: <20201030200202.GA19776@work-vm>

On 2020-10-30 16:02, Dr. David Alan Gilbert wrote:
> * Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
>> Hello,
>> 
>> Dov Murik, James Bottomley, Hubertus Franke, and I have been working
>> on a plan for fast live migration with SEV and SEV-ES. We just posted
>> an RFC about it to the edk2 list. It includes a proof-of-concept for
>> what we feel to be the most difficult part of fast live migration
>> with SEV-ES.
>> 
>> https://edk2.groups.io/g/devel/topic/77875297
>> 
>> This was posted to the edk2 list because OVMF is one of the main
>> components of our approach to live migration. With SEV/SEV-ES the
>> hypervisor generally does not have access to guest memory/CPU state.
>> We propose a Migration Handler in OVMF that runs inside the guest and
>> assists the hypervisor with migration. One major challenge to this
>> approach is that for SEV-ES this Migration Handler must be able to
>> set the CPU state of the target to the CPU state of the source while
>> the target is running. Our demo shows that this is feasible.
>> 
>> While OVMF is a major component of our approach, QEMU obviously has
>> a huge part to play as well. We want to start thinking about the
>> best way to support fast live migration for SEV and for encrypted
>> VMs in general. A handful of issues are starting to come into focus.
>> For instance, the target VM needs to be started before we begin
>> receiving pages from the source VM.
> 
> That might not be that hard to fudge; we already start the VM in
> postcopy mode before we have all of RAM. Now, in that mode we have to
> do work to protect the RAM because we expect the guest to access it;
> in this case you possibly don't need to.
> 
I'll need to think a bit about this. The basic setup is that we want
the VM to boot into OVMF and initialize the Migration Handler. Then
QEMU will start receiving encrypted pages and passing them into OVMF
via some shared address. The Migration Handler will decrypt the pages
and put them into place, overwriting everything around it. The
Migration Handler will be a runtime driver, so it should avoid
overwriting itself.
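
To make that handoff concrete, here is a minimal sketch of what the
shared-address interface might look like. Everything in it is an
assumption for illustration: the mailbox layout, the field names, and
the polling handshake are not an existing QEMU or OVMF interface.

  #include <stdint.h>
  #include <string.h>

  #define MH_PAGE_SIZE 4096

  /* Hypothetical shared page through which QEMU hands encrypted
   * pages to the in-guest Migration Handler. */
  struct mh_mailbox {
      uint64_t gpa;                /* where the guest should place the page */
      uint32_t status;             /* 0 = empty, 1 = full */
      uint8_t  data[MH_PAGE_SIZE]; /* transport-encrypted page contents */
  };

  /* Target-side QEMU (sketch): publish one incoming page, then wait
   * for the Migration Handler to drain the mailbox. */
  static void mh_deliver_page(volatile struct mh_mailbox *mb,
                              uint64_t gpa, const uint8_t *buf)
  {
      while (mb->status != 0)
          ;                        /* handler still consuming previous page */
      mb->gpa = gpa;
      memcpy((void *)mb->data, buf, MH_PAGE_SIZE);
      mb->status = 1;              /* hand the page to the Migration Handler */
  }

On the guest side the Migration Handler would poll the same status
field, decrypt data with the shared transport key, copy the plaintext
to gpa, and set status back to 0.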

>> We will also need to start an extra vCPU for the Migration Handler
>> to run on. We are currently working on a demo of end-to-end live
>> migration for SEV (non-ES) VMs that should help crystallize these
>> issues. It should be on the list around the end of the year. For
>> now, check out our other post, which has a lot more information, and
>> let me know if you have any thoughts.
> 
> I don't think I understood why you'd want to map the VMSA, or why it
> would be encrypted in such a way that it was useful to you.
> 
We map the VMSA into the guest because it gives us an easy way to
securely send the CPU state to the target.

Each time there is a VMExit, the CPU state of the guest is stored in
the VMSA. Although the VMSA is encrypted with the guest's key, it
usually isn't mapped into the guest. During migration we securely
transfer guest memory from source to destination (the Migration
Handlers on the source and target share a key which they use to
encrypt/decrypt the pages). If we tweak the NPT to add the VMSA to the
guest, the VMSA will be sent along with all the other pages.
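
As a sketch of that NPT tweak on the host side (MH_VMSA_GPA is a
made-up agreed-upon address and npt_map_page() is a stand-in helper;
neither is an existing KVM interface):

  #include <stdint.h>

  #define MH_VMSA_GPA 0xFFFFD000ULL  /* illustrative guest-physical address */

  /* Hypothetical helper: insert host page frame 'pfn' at guest-physical
   * address 'gpa' in the nested page tables. */
  int npt_map_page(uint64_t gpa, uint64_t pfn);

  /* Expose a vCPU's encrypted VMSA page through the NPT so that the
   * ordinary page-migration path sends it like any other guest page. */
  static int map_vmsa_into_guest(uint64_t vmsa_pfn)
  {
      return npt_map_page(MH_VMSA_GPA, vmsa_pfn);
  }

Because the page is encrypted with the guest's key, the hypervisor
still can't read it; it just becomes one more page for the Migration
Handlers to encrypt, send, and decrypt.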

There are some timing details to work out. We'll need to force a VMExit
once we reach convergence and re-send the VMSA page to make sure it's
up to date. Once the target has all the pages, the Migration Handler
can just read the CPU state from some known address.
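
On the target, that last step might look roughly like this (the
abbreviated save-area struct and mh_resume_vcpu() are illustrative
stand-ins, not the real VMSA layout or an existing routine):

  #include <stdint.h>

  #define MH_VMSA_GPA 0xFFFFD000ULL  /* same agreed-upon address as above */

  struct vmsa_save_area {            /* heavily abbreviated, illustrative */
      uint64_t rip;
      uint64_t rsp;
      uint64_t rflags;
      /* ... the rest of the saved register state ... */
  };

  /* Hypothetical: resume the vCPU with the given register state. */
  void mh_resume_vcpu(const struct vmsa_save_area *state);

  static void mh_finish_migration(void)
  {
      /* All pages, including the re-sent VMSA, have arrived; the guest
       * can read the save area in the clear at the known address. */
      const struct vmsa_save_area *vmsa =
          (const struct vmsa_save_area *)(uintptr_t)MH_VMSA_GPA;

      mh_resume_vcpu(vmsa);          /* adopt the source's CPU state */
  }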

-Tobin

> Dave
> 
> 
>> -Tobin
>> 

