All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH v2 00/12] Confidential guest-assisted live migration
@ 2021-08-23 14:16 Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 01/12] migration: Add helpers to save confidential RAM Dov Murik
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

This is an RFC series for fast migration of confidential guests using an
in-guest migration helper that lives in OVMF.  QEMU live migration
needs to read the source VM's RAM and write it into the target VM; this
mechanism doesn't work when the guest memory is encrypted or QEMU is
prevented from reading it in another way.  In order to support live
migration in such scenarios, we introduce an in-guest migration helper
which can securely extract RAM content from the guest in order to send
it to the target.  The migration helper is implemented as part of the
VM's firmware in OVMF.

We've implemented and tested this on AMD SEV, but expect that most of
the approach can be applied to other technologies that prevent the
hypervisor from directly accessing the guest's memory.  Specifically, we
don't use SEV's PSP migration commands (SEV_SEND_START,
SEV_RECEIVE_START, etc) at all; but note that the mirror VM relies on
KVM_CAP_VM_COPY_ENC_CONTEXT_FROM to share the SEV ASID with the main VM.

Corresponding RFC patches for OVMF have been posted by Tobin
Feldman-Fitzthum on edk2-devel [1].  Those include the crux of the
migration helper: a mailbox protocol over a shared memory page which
allows communication between QEMU and the migration helper.  In the
source VM this is used to read a page and encrypt it for transport; in
the target it is used to decrypt the incoming page and store the
content at the correct address in the guest memory.  All encryption and
decryption operations occur inside the trusted context in the VM, and
therefore the VM's memory plaintext content is never accessible to the
hosts participating in the migration.
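As a rough illustration of the mailbox handshake described above, the
sketch below models the two sides of the protocol in one standalone C
file.  The struct fields and function names here are hypothetical
simplifications for illustration; the real shared-page layout lives in
the patches below and in the OVMF series [1].

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the shared (unencrypted) command page. */
struct mh_cmd_params {
    uint64_t cmd_type;   /* e.g. ENCRYPT or DECRYPT */
    uint64_t gpa;        /* guest physical address of the page to process */
    int32_t  ret;        /* result code published by the helper */
    int32_t  go;         /* set by QEMU to hand the command over */
    int32_t  done;       /* set by the helper when it has finished */
};

/* QEMU side: publish a command on the shared page. */
static void qemu_submit(volatile struct mh_cmd_params *p,
                        uint64_t cmd, uint64_t gpa)
{
    p->cmd_type = cmd;
    p->gpa = gpa;
    p->ret = -1;
    p->done = 0;
    /* a write barrier goes here in the real code */
    p->go = 1;
}

/* Guest side: the helper spins on 'go', does the crypto work on the
 * shared IO page, then publishes 'done'. */
static void helper_serve_one(volatile struct mh_cmd_params *p)
{
    while (!p->go) {
        /* busy-wait on the shared page */
    }
    /* ... encrypt or decrypt the page at p->gpa here ... */
    p->ret = 0;
    p->done = 1;
}
```

In the real series the two sides run in different security domains:
QEMU polls the done flag with a timeout, and the helper loop runs on
the mirror vcpu inside the encrypted context.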

In order to allow OVMF to run the migration helper in parallel to the
guest OS, we use a mirror VM [3], which shares the same memory mapping
and SEV ASID as the main VM but has its own run loop.  To start the
mirror vcpu and the migration handler, we added a temporary
start-migration-handler QMP command; in a future version this will be
removed and run as part of the migrate QMP command.

In the target VM we need the migration handler running to receive
incoming RAM pages; to achieve that, we boot the VM into OVMF with a
special fw_cfg value that causes OVMF to not boot the guest OS; we then
allow QEMU to receive an incoming migration by issuing a new
start-migrate-incoming QMP command.

The confidential RAM migration requires checking whether a given guest
RAM page is encrypted or not.  This is achieved using SEV shared regions
list tracking, which is implemented as part of the SEV live migration patch
series [2].  This feature tracks hypercalls from OVMF and guest Linux to
report changes of page encryption status so that QEMU has an up-to-date
view of which memory regions are shared and which are encrypted.
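Conceptually, this tracking boils down to QEMU maintaining a list of
ranges the guest has reported as shared; a page is treated as encrypted
unless it falls inside one of them.  A standalone sketch (the data
structures here are hypothetical; the real tracking is implemented in
[2]):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A guest-reported shared (unencrypted) range of guest frame numbers,
 * half-open: [gfn_start, gfn_end). */
struct shared_region {
    uint64_t gfn_start;
    uint64_t gfn_end;
};

#define MAX_SHARED_REGIONS 64

static struct shared_region shared_regions[MAX_SHARED_REGIONS];
static size_t nr_shared_regions;

/* Called when the guest's page-encryption-status hypercall reports a
 * range as shared. */
static void mark_range_shared(uint64_t gfn_start, uint64_t gfn_end)
{
    if (nr_shared_regions < MAX_SHARED_REGIONS) {
        shared_regions[nr_shared_regions].gfn_start = gfn_start;
        shared_regions[nr_shared_regions].gfn_end = gfn_end;
        nr_shared_regions++;
    }
}

/* Migration-time query: pages outside every shared region are
 * encrypted and must go through the in-guest migration helper. */
static bool gfn_is_encrypted(uint64_t gfn)
{
    for (size_t i = 0; i < nr_shared_regions; i++) {
        if (gfn >= shared_regions[i].gfn_start &&
            gfn < shared_regions[i].gfn_end) {
            return false;
        }
    }
    return true;
}
```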

We left a few rough edges in this RFC but decided to publish it to
start the community discussion.  TODOs:

1. QMP commands start-migration-handler and start-migrate-incoming are
   developer tools; these steps should eventually be performed
   automatically.
2. The entry point address of the in-guest migration handler and its GDT
   are currently hard-coded in QEMU (patch 8); instead they should be
   discovered using pc_system_ovmf_table_find.  Same applies for the
   mailbox address (patch 1).
3. For simplicity, this patch series forces the use of the
   guest-assisted migration instead of the SEV PSP-based migration.
   Ideally we might want the user to choose the desired mode using
   migrate-set-parameters or a similar mechanism.
4. There is currently no discovery protocol between QEMU and OVMF to
   verify that OVMF indeed supports in-guest migration handler.
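For TODO 2, the discovery would follow the usual OVMF GUIDed-structure
pattern: walk a packed table of (data, 16-bit length, GUID) entries
backwards from the end and return the payload matching the
migration-helper GUID.  A standalone sketch of that walk (the exact
entry layout assumed here is for illustration only; QEMU's real lookup
helper is pc_system_ovmf_table_find):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Each entry is assumed to be laid out as:
 *   payload bytes | 2-byte little-endian total length | 16-byte GUID
 * where the length covers payload + length field + GUID.  Entries are
 * packed back-to-back and scanned from the end of the table. */
static const uint8_t *table_find(const uint8_t *table, size_t table_len,
                                 const uint8_t guid[16], size_t *data_len)
{
    const uint8_t *p = table + table_len;   /* walk backwards from the end */

    while ((size_t)(p - table) >= 16 + 2) {
        const uint8_t *entry_guid = p - 16;
        uint16_t len = (uint16_t)(p[-18] | (p[-17] << 8));

        if (len < 16 + 2 || len > (size_t)(p - table)) {
            return NULL;                    /* malformed table */
        }
        if (memcmp(entry_guid, guid, 16) == 0) {
            *data_len = len - (16 + 2);     /* payload size */
            return p - len;                 /* payload start */
        }
        p -= len;                           /* skip to the previous entry */
    }
    return NULL;
}
```

With such a lookup, the migration-helper mailbox address and entry
point could be published by OVMF and discovered at runtime instead of
being hard-coded in QEMU.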


List of patches in this series:

1-3: introduce new confidential RAM migration functions which
     communicate with the migration helper.
4-6: use the new MH communication functions when migrating encrypted RAM
     pages
7-9: allow starting the migration handler on the mirror vcpu with the
     QMP command start-migration-handler
10:  introduce the start-migrate-incoming QMP command to switch the
     target into accepting the incoming migration.
11:  fix device issues when loading state into a live VM
12:  add documentation


This patch series is based on top of:

1. Add SEV guest live migration support, from Ashish Kalra [2]
2. Support for mirror VM, from Ashish Kalra [3]

[1] https://edk2.groups.io/g/devel/message/79517
[2] https://lore.kernel.org/qemu-devel/cover.1628076205.git.ashish.kalra@amd.com/
[3] https://lore.kernel.org/qemu-devel/cover.1629118207.git.ashish.kalra@amd.com/


Changes from RFC v1:
 - Use an SEV mirror VM for the migration handler (instead of
   auxiliary vcpus)

RFC v1:
https://lore.kernel.org/qemu-devel/20210302204822.81901-1-dovmurik@linux.vnet.ibm.com/


Dov Murik (12):
  migration: Add helpers to save confidential RAM
  migration: Add helpers to load confidential RAM
  migration: Introduce gpa_inside_migration_helper_shared_area
  migration: Save confidential guest RAM using migration helper
  migration: Load confidential guest RAM using migration helper
  migration: Skip ROM, non-RAM, and vga.vram memory region during RAM
    migration
  i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc
  migration: Allow resetting the mirror vcpu to the MH entry point
  migration: Add QMP command start-migration-handler
  migration: Add start-migrate-incoming QMP command
  hw/isa/lpc_ich9: Allow updating an already-running VM
  docs: Add confidential guest live migration documentation

 docs/confidential-guest-live-migration.rst | 145 +++++++++
 docs/confidential-guest-support.txt        |   5 +
 docs/index.rst                             |   1 +
 qapi/migration.json                        |  38 +++
 include/sysemu/sev.h                       |   1 +
 migration/confidential-ram.h               |  23 ++
 hw/isa/lpc_ich9.c                          |   3 +-
 migration/confidential-ram.c               | 339 +++++++++++++++++++++
 migration/migration.c                      |  29 ++
 migration/ram.c                            | 133 +++++++-
 target/i386/kvm/kvm.c                      |   4 +-
 migration/meson.build                      |   2 +-
 migration/trace-events                     |   4 +
 13 files changed, 714 insertions(+), 13 deletions(-)
 create mode 100644 docs/confidential-guest-live-migration.rst
 create mode 100644 migration/confidential-ram.h
 create mode 100644 migration/confidential-ram.c

-- 
2.20.1



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 01/12] migration: Add helpers to save confidential RAM
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 02/12] migration: Add helpers to load " Dov Murik
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

QEMU cannot read the memory of memory-encrypted guests, which is
required for sending RAM to the migration target.  Instead, QEMU asks a
migration helper running on an auxiliary vcpu in the guest to extract
pages from memory; these pages are encrypted with a transfer key that is
known to the source and target guests, but not to either QEMU.

The interaction with the guest migration helper is performed using two
shared (unencrypted) pages which both QEMU and guest can read from and
write to.  The details of the mailbox protocol are described in
migration/confidential-ram.c.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/confidential-ram.h |  17 ++++
 migration/confidential-ram.c | 184 +++++++++++++++++++++++++++++++++++
 migration/meson.build        |   2 +-
 migration/trace-events       |   3 +
 4 files changed, 205 insertions(+), 1 deletion(-)
 create mode 100644 migration/confidential-ram.h
 create mode 100644 migration/confidential-ram.c

diff --git a/migration/confidential-ram.h b/migration/confidential-ram.h
new file mode 100644
index 0000000000..0d49718d31
--- /dev/null
+++ b/migration/confidential-ram.h
@@ -0,0 +1,17 @@
+/*
+ * QEMU migration for confidential guest's RAM
+ */
+
+#ifndef QEMU_CONFIDENTIAL_RAM_H
+#define QEMU_CONFIDENTIAL_RAM_H
+
+#include "exec/cpu-common.h"
+#include "qemu-file.h"
+
+void cgs_mh_init(void);
+void cgs_mh_cleanup(void);
+
+int cgs_mh_save_encrypted_page(QEMUFile *f, ram_addr_t src_gpa, uint32_t size,
+                               uint64_t *bytes_sent);
+
+#endif
diff --git a/migration/confidential-ram.c b/migration/confidential-ram.c
new file mode 100644
index 0000000000..65a588e7f6
--- /dev/null
+++ b/migration/confidential-ram.c
@@ -0,0 +1,184 @@
+#include "qemu/osdep.h"
+
+#include "cpu.h"
+#include "qemu/error-report.h"
+#include "qemu/rcu.h"
+#include "qemu/coroutine.h"
+#include "qemu/timer.h"
+#include "io/channel.h"
+#include "qapi/error.h"
+#include "exec/memory.h"
+#include "trace.h"
+#include "confidential-ram.h"
+
+enum cgs_mig_helper_cmd {
+    /* Initialize migration helper in guest */
+    CGS_MIG_HELPER_CMD_INIT = 0,
+
+    /*
+     * Fetch a page from gpa, encrypt it, and save result into the shared page
+     */
+    CGS_MIG_HELPER_CMD_ENCRYPT,
+
+    /* Read the shared page, decrypt it, and save result into gpa */
+    CGS_MIG_HELPER_CMD_DECRYPT,
+
+    /* Reset migration helper in guest */
+    CGS_MIG_HELPER_CMD_RESET,
+
+    CGS_MIG_HELPER_CMD_MAX
+};
+
+struct QEMU_PACKED CGSMigHelperCmdParams {
+    uint64_t cmd_type;
+    uint64_t gpa;
+    int32_t prefetch;
+    int32_t ret;
+    int32_t go;
+    int32_t done;
+};
+typedef struct CGSMigHelperCmdParams CGSMigHelperCmdParams;
+
+struct QEMU_PACKED CGSMigHelperPageHeader {
+    uint32_t len;
+    uint8_t data[];
+};
+typedef struct CGSMigHelperPageHeader CGSMigHelperPageHeader;
+
+struct CGSMigHelperState {
+    CGSMigHelperCmdParams *cmd_params;
+    CGSMigHelperPageHeader *io_page_hdr;
+    uint8_t *io_page;
+    bool initialized;
+};
+typedef struct CGSMigHelperState CGSMigHelperState;
+
+static CGSMigHelperState cmhs = {0};
+
+#define MH_BUSYLOOP_TIMEOUT       100000000LL
+#define MH_REQUEST_TIMEOUT_MS     100
+#define MH_REQUEST_TIMEOUT_NS     (MH_REQUEST_TIMEOUT_MS * 1000 * 1000)
+
+/*
+ * The migration helper shared area is hard-coded at gpa 0x820000 with size of
+ * 2 pages (0x2000 bytes).  Instead of hard-coding, the address and size may be
+ * fetched from OVMF itself using a pc_system_ovmf_table_find call to query
+ * OVMF's GUIDed structure for a migration helper GUID.
+ */
+#define MH_SHARED_CMD_PARAMS_ADDR    0x820000
+#define MH_SHARED_IO_PAGE_HDR_ADDR   (MH_SHARED_CMD_PARAMS_ADDR + 0x800)
+#define MH_SHARED_IO_PAGE_ADDR       (MH_SHARED_CMD_PARAMS_ADDR + 0x1000)
+
+void cgs_mh_init(void)
+{
+    RCU_READ_LOCK_GUARD();
+    cmhs.cmd_params = qemu_map_ram_ptr(NULL, MH_SHARED_CMD_PARAMS_ADDR);
+    cmhs.io_page_hdr = qemu_map_ram_ptr(NULL, MH_SHARED_IO_PAGE_HDR_ADDR);
+    cmhs.io_page = qemu_map_ram_ptr(NULL, MH_SHARED_IO_PAGE_ADDR);
+}
+
+static int send_command_to_cgs_mig_helper(uint64_t cmd_type, uint64_t gpa)
+{
+    /*
+     * The cmd_params struct is on a page shared with the guest migration
+     * helper.  We use a volatile struct to force writes to memory so that the
+     * guest can see them.
+     */
+    volatile CGSMigHelperCmdParams *params = cmhs.cmd_params;
+    int64_t counter, request_timeout_at;
+
+    /*
+     * At this point io_page and io_page_hdr should be already filled according
+     * to the requested cmd_type.
+     */
+
+    params->cmd_type = cmd_type;
+    params->gpa = gpa;
+    params->prefetch = 0;
+    params->ret = -1;
+    params->done = 0;
+
+    /*
+     * Force writes of all command parameters before writing the 'go' flag.
+     * The guest migration handler waits for the go flag and then reads the
+     * command parameters.
+     */
+    smp_wmb();
+
+    /* Tell the migration helper to start working on this command */
+    params->go = 1;
+
+    /*
+     * Wait for the guest migration helper to process the command and mark the
+     * done flag
+     */
+    request_timeout_at = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
+                         MH_REQUEST_TIMEOUT_NS;
+    do {
+        counter = 0;
+        while (!params->done && (counter < MH_BUSYLOOP_TIMEOUT)) {
+            counter++;
+        }
+    } while (!params->done &&
+             qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < request_timeout_at);
+
+    if (!params->done) {
+        error_report("Migration helper command %" PRIu64 " timed-out for "
+                     "gpa 0x%" PRIx64, cmd_type, gpa);
+        return -EIO;
+    }
+
+    return params->ret;
+}
+
+static void init_cgs_mig_helper_if_needed(void)
+{
+    int ret;
+
+    if (cmhs.initialized) {
+        return;
+    }
+
+    ret = send_command_to_cgs_mig_helper(CGS_MIG_HELPER_CMD_INIT, 0);
+    if (ret == 0) {
+        cmhs.initialized = true;
+    }
+}
+
+void cgs_mh_cleanup(void)
+{
+    send_command_to_cgs_mig_helper(CGS_MIG_HELPER_CMD_RESET, 0);
+}
+
+int cgs_mh_save_encrypted_page(QEMUFile *f, ram_addr_t src_gpa, uint32_t size,
+                               uint64_t *bytes_sent)
+{
+    int ret;
+
+    init_cgs_mig_helper_if_needed();
+
+    /* Ask the migration helper to encrypt the page at src_gpa */
+    trace_encrypted_ram_save_page(size, src_gpa);
+    ret = send_command_to_cgs_mig_helper(CGS_MIG_HELPER_CMD_ENCRYPT, src_gpa);
+    if (ret) {
+        error_report("Error cgs_mh_save_encrypted_page ret=%d", ret);
+        return -1;
+    }
+
+    /* Sanity check for response header */
+    if (cmhs.io_page_hdr->len > 1024) {
+        error_report("confidential-ram: migration helper response is too large "
+                     "(len=%u)", cmhs.io_page_hdr->len);
+        return -EINVAL;
+    }
+
+    qemu_put_be32(f, cmhs.io_page_hdr->len);
+    qemu_put_buffer(f, cmhs.io_page_hdr->data, cmhs.io_page_hdr->len);
+    *bytes_sent = 4 + cmhs.io_page_hdr->len;
+
+    qemu_put_be32(f, size);
+    qemu_put_buffer(f, cmhs.io_page, size);
+    *bytes_sent += 4 + size;
+
+    return ret;
+}
diff --git a/migration/meson.build b/migration/meson.build
index f8714dcb15..774223c1a3 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -32,4 +32,4 @@ softmmu_ss.add(when: 'CONFIG_LIVE_BLOCK_MIGRATION', if_true: files('block.c'))
 softmmu_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
 
 specific_ss.add(when: 'CONFIG_SOFTMMU',
-                if_true: files('dirtyrate.c', 'ram.c', 'target.c'))
+                if_true: files('dirtyrate.c', 'ram.c', 'target.c', 'confidential-ram.c'))
diff --git a/migration/trace-events b/migration/trace-events
index a1c0f034ab..3d442a767f 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -344,3 +344,6 @@ migration_block_save_pending(uint64_t pending) "Enter save live pending  %" PRIu
 # page_cache.c
 migration_pagecache_init(int64_t max_num_items) "Setting cache buckets to %" PRId64
 migration_pagecache_insert(void) "Error allocating page"
+
+# confidential-ram.c
+encrypted_ram_save_page(uint32_t size, uint64_t gpa) "size: %u, gpa: 0x%" PRIx64
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 02/12] migration: Add helpers to load confidential RAM
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 01/12] migration: Add helpers to save confidential RAM Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 03/12] migration: Introduce gpa_inside_migration_helper_shared_area Dov Murik
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

QEMU cannot write directly to the memory of memory-encrypted guests;
this breaks normal RAM-load in the migration target.  Instead, QEMU
asks a migration helper running on an auxiliary vcpu in the guest to
restore each encrypted page, as received from the source, at its
specific GPA.

The migration helper running inside the guest can safely decrypt the
pages arriving from the source and load them into their proper location
in the guest's memory.

Loading pages uses the same shared (unencrypted) pages which both QEMU
and the guest can read from and write to.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/confidential-ram.h |  2 ++
 migration/confidential-ram.c | 37 ++++++++++++++++++++++++++++++++++++
 migration/trace-events       |  1 +
 3 files changed, 40 insertions(+)

diff --git a/migration/confidential-ram.h b/migration/confidential-ram.h
index 0d49718d31..ebe4073bce 100644
--- a/migration/confidential-ram.h
+++ b/migration/confidential-ram.h
@@ -14,4 +14,6 @@ void cgs_mh_cleanup(void);
 int cgs_mh_save_encrypted_page(QEMUFile *f, ram_addr_t src_gpa, uint32_t size,
                                uint64_t *bytes_sent);
 
+int cgs_mh_load_encrypted_page(QEMUFile *f, ram_addr_t dest_gpa);
+
 #endif
diff --git a/migration/confidential-ram.c b/migration/confidential-ram.c
index 65a588e7f6..053ecea1d4 100644
--- a/migration/confidential-ram.c
+++ b/migration/confidential-ram.c
@@ -182,3 +182,40 @@ int cgs_mh_save_encrypted_page(QEMUFile *f, ram_addr_t src_gpa, uint32_t size,
 
     return ret;
 }
+
+int cgs_mh_load_encrypted_page(QEMUFile *f, ram_addr_t dest_gpa)
+{
+    int ret = 1;
+    uint32_t page_hdr_len, enc_page_len;
+
+    init_cgs_mig_helper_if_needed();
+
+    assert((dest_gpa & TARGET_PAGE_MASK) == dest_gpa);
+
+    /* Read page header */
+    page_hdr_len = qemu_get_be32(f);
+    if (page_hdr_len > 1024) {
+        error_report("confidential-ram: page header is too large (%u bytes) "
+                     "when loading gpa 0x%" PRIx64, page_hdr_len, dest_gpa);
+        return -EINVAL;
+    }
+    cmhs.io_page_hdr->len = page_hdr_len;
+    qemu_get_buffer(f, cmhs.io_page_hdr->data, page_hdr_len);
+
+    /* Read encrypted page */
+    enc_page_len = qemu_get_be32(f);
+    if (enc_page_len != TARGET_PAGE_SIZE) {
+        error_report("confidential-ram: unexpected encrypted page size (%u "
+                     "bytes) when loading gpa 0x%" PRIx64, enc_page_len, dest_gpa);
+        return -EINVAL;
+    }
+    qemu_get_buffer(f, cmhs.io_page, enc_page_len);
+
+    trace_encrypted_ram_load_page(page_hdr_len, enc_page_len, dest_gpa);
+    ret = send_command_to_cgs_mig_helper(CGS_MIG_HELPER_CMD_DECRYPT, dest_gpa);
+    if (ret) {
+        error_report("confidential-ram: failed loading page at gpa "
+                     "0x%" PRIx64 ": ret=%d", dest_gpa, ret);
+    }
+    return ret;
+}
diff --git a/migration/trace-events b/migration/trace-events
index 3d442a767f..5a6b5c8230 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -346,4 +346,5 @@ migration_pagecache_init(int64_t max_num_items) "Setting cache buckets to %" PRI
 migration_pagecache_insert(void) "Error allocating page"
 
 # confidential-ram.c
+encrypted_ram_load_page(uint32_t hdr_len, uint32_t trans_len, uint64_t gpa) "hdr_len: %u, trans_len: %u, gpa: 0x%" PRIx64
 encrypted_ram_save_page(uint32_t size, uint64_t gpa) "size: %u, gpa: 0x%" PRIx64
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 03/12] migration: Introduce gpa_inside_migration_helper_shared_area
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 01/12] migration: Add helpers to save confidential RAM Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 02/12] migration: Add helpers to load " Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 04/12] migration: Save confidential guest RAM using migration helper Dov Murik
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

The gpa_inside_migration_helper_shared_area function will be used to
skip migrating RAM pages that are used by the migration helper at the
target.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/confidential-ram.h | 2 ++
 migration/confidential-ram.c | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/migration/confidential-ram.h b/migration/confidential-ram.h
index ebe4073bce..9a1027bdaf 100644
--- a/migration/confidential-ram.h
+++ b/migration/confidential-ram.h
@@ -8,6 +8,8 @@
 #include "exec/cpu-common.h"
 #include "qemu-file.h"
 
+bool gpa_inside_migration_helper_shared_area(ram_addr_t gpa);
+
 void cgs_mh_init(void);
 void cgs_mh_cleanup(void);
 
diff --git a/migration/confidential-ram.c b/migration/confidential-ram.c
index 053ecea1d4..30002448b9 100644
--- a/migration/confidential-ram.c
+++ b/migration/confidential-ram.c
@@ -68,6 +68,12 @@ static CGSMigHelperState cmhs = {0};
 #define MH_SHARED_CMD_PARAMS_ADDR    0x820000
 #define MH_SHARED_IO_PAGE_HDR_ADDR   (MH_SHARED_CMD_PARAMS_ADDR + 0x800)
 #define MH_SHARED_IO_PAGE_ADDR       (MH_SHARED_CMD_PARAMS_ADDR + 0x1000)
+#define MH_SHARED_LAST_BYTE          (MH_SHARED_CMD_PARAMS_ADDR + 0x1fff)
+
+bool gpa_inside_migration_helper_shared_area(ram_addr_t gpa)
+{
+    return gpa >= MH_SHARED_CMD_PARAMS_ADDR && gpa <= MH_SHARED_LAST_BYTE;
+}
 
 void cgs_mh_init(void)
 {
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 04/12] migration: Save confidential guest RAM using migration helper
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (2 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 03/12] migration: Introduce gpa_inside_migration_helper_shared_area Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 05/12] migration: Load " Dov Murik
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

When saving RAM pages of a confidential guest, check whether a page is
encrypted.  If it is, ask the in-guest migration helper to encrypt the
page for transmission.

This patch forces the use of the in-guest migration handler instead of
the PSP-based SEV migration; this is temporary.  TODO: introduce a
migration parameter for choosing the migration mode.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 include/sysemu/sev.h |   1 +
 migration/ram.c      | 109 +++++++++++++++++++++++++++++++++++++++----
 2 files changed, 101 insertions(+), 9 deletions(-)

diff --git a/include/sysemu/sev.h b/include/sysemu/sev.h
index d04890113c..ea52d2f41f 100644
--- a/include/sysemu/sev.h
+++ b/include/sysemu/sev.h
@@ -19,6 +19,7 @@
 
 #define RAM_SAVE_ENCRYPTED_PAGE           0x1
 #define RAM_SAVE_SHARED_REGIONS_LIST      0x2
+#define RAM_SAVE_GUEST_MH_ENCRYPTED_PAGE  0x4
 
 bool sev_enabled(void);
 int sev_kvm_init(ConfidentialGuestSupport *cgs, Error **errp);
diff --git a/migration/ram.c b/migration/ram.c
index 4eca90cceb..a1f89445d4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -51,12 +51,14 @@
 #include "migration/colo.h"
 #include "block.h"
 #include "sysemu/cpu-throttle.h"
+#include "sysemu/kvm.h"
 #include "savevm.h"
 #include "qemu/iov.h"
 #include "multifd.h"
 #include "sysemu/runstate.h"
 #include "hw/boards.h"
 #include "exec/confidential-guest-support.h"
+#include "confidential-ram.h"
 
 /* Defines RAM_SAVE_ENCRYPTED_PAGE and RAM_SAVE_SHARED_REGION_LIST */
 #include "sysemu/sev.h"
@@ -97,6 +99,13 @@ bool memcrypt_enabled(void)
     return ms->cgs->ready;
 }
 
+static inline bool confidential_guest(void)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+
+    return ms->cgs;
+}
+
 XBZRLECacheStats xbzrle_counters;
 
 /* struct contains XBZRLE cache and a static page
@@ -2091,6 +2100,49 @@ static bool encrypted_test_list(RAMState *rs, RAMBlock *block,
     return ops->is_gfn_in_unshared_region(gfn);
 }
 
+/**
+ * ram_save_mh_encrypted_page - use the guest migration handler to encrypt
+ * a page and send it to the stream.
+ *
+ * Return the number of pages written (=1).
+ */
+static int ram_save_mh_encrypted_page(RAMState *rs, PageSearchStatus *pss,
+                                      bool last_stage)
+{
+    int ret;
+    uint8_t *p;
+    RAMBlock *block = pss->block;
+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
+    ram_addr_t gpa;
+    uint64_t bytes_sent;
+
+    p = block->host + offset;
+
+    /* Find the GPA of the page */
+    if (!kvm_physical_memory_addr_from_host(kvm_state, p, &gpa)) {
+        error_report("%s failed to get gpa for offset %" PRIu64 " block %s",
+                     __func__, offset, memory_region_name(block->mr));
+        return -1;
+    }
+
+    ram_counters.transferred +=
+        save_page_header(rs, rs->f, block,
+                         offset | RAM_SAVE_FLAG_ENCRYPTED_DATA);
+
+    qemu_put_be32(rs->f, RAM_SAVE_GUEST_MH_ENCRYPTED_PAGE);
+    ram_counters.transferred += sizeof(uint32_t);
+
+    ret = cgs_mh_save_encrypted_page(rs->f, gpa, TARGET_PAGE_SIZE, &bytes_sent);
+    if (ret) {
+        return -1;
+    }
+
+    ram_counters.transferred += bytes_sent;
+    ram_counters.normal++;
+
+    return 1;
+}
+
 /**
  * ram_save_target_page: save one target page
  *
@@ -2111,17 +2163,48 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         return res;
     }
 
-    /*
-     * If memory encryption is enabled then use memory encryption APIs
-     * to write the outgoing buffer to the wire. The encryption APIs
-     * will take care of accessing the guest memory and re-encrypt it
-     * for the transport purposes.
-     */
-    if (memcrypt_enabled() &&
-        encrypted_test_list(rs, pss->block, pss->page)) {
-        return ram_save_encrypted_page(rs, pss, last_stage);
+    if (confidential_guest()) {
+        /*
+         * TODO: We'd like to support two migration modes for SEV guests:
+         * PSP-based and guest-assisted.  A possible solution is to add a new
+         * migration parameter ("use_guest_assistance") that will control which
+         * mode should be used.
+         */
+        bool guest_assisted_confidential_migration = true;
+
+        if (guest_assisted_confidential_migration) {
+            /*
+             * If memory encryption is enabled then skip saving the data pages
+             * used by the migration handler.
+             */
+            if (gpa_inside_migration_helper_shared_area(offset)) {
+                return 0;
+            }
+
+            /*
+             * If memory encryption is enabled then use in-guest migration
+             * helper to write the outgoing buffer to the wire. The migration
+             * helper will take care of accessing the guest memory and
+             * re-encrypt it for the transport purposes.
+             */
+            if (encrypted_test_list(rs, pss->block, pss->page)) {
+                return ram_save_mh_encrypted_page(rs, pss, last_stage);
+            }
+        } else {
+            /*
+             * If memory encryption is enabled then use memory encryption APIs
+             * to write the outgoing buffer to the wire. The encryption APIs
+             * will take care of accessing the guest memory and re-encrypt it
+             * for the transport purposes.
+             */
+            if (memcrypt_enabled() &&
+                encrypted_test_list(rs, pss->block, pss->page)) {
+                return ram_save_encrypted_page(rs, pss, last_stage);
+            }
+        }
     }
 
+
     if (save_compress_page(rs, block, offset)) {
         return 1;
     }
@@ -2959,6 +3042,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
         return -1;
     }
 
+    if (confidential_guest()) {
+        cgs_mh_init();
+    }
+
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
         if (ram_init_all(rsp) != 0) {
@@ -3167,6 +3254,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         }
     }
 
+    if (confidential_guest()) {
+        cgs_mh_cleanup();
+    }
+
     if (ret >= 0) {
         multifd_send_sync_main(rs->f);
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 05/12] migration: Load confidential guest RAM using migration helper
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (3 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 04/12] migration: Save confidential guest RAM using migration helper Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 06/12] migration: Skip ROM, non-RAM, and vga.vram memory region during RAM migration Dov Murik
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

When loading encrypted RAM pages of a confidential guest, ask the
in-guest migration helper to decrypt the incoming page and place it
correctly in the guest memory at the appropriate address.  This way the
page's plaintext content remains inaccessible to the host.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/ram.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index a1f89445d4..2d5889f795 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1250,6 +1250,7 @@ static int load_encrypted_data(QEMUFile *f, uint8_t *ptr)
         cgs_class->memory_encryption_ops;
 
     int flag;
+    hwaddr gpa;
 
     flag = qemu_get_be32(f);
 
@@ -1257,6 +1258,12 @@ static int load_encrypted_data(QEMUFile *f, uint8_t *ptr)
         return ops->load_incoming_page(f, ptr);
     } else if (flag == RAM_SAVE_SHARED_REGIONS_LIST) {
         return ops->load_incoming_shared_regions_list(f);
+    } else if (flag == RAM_SAVE_GUEST_MH_ENCRYPTED_PAGE) {
+        if (!kvm_physical_memory_addr_from_host(kvm_state, ptr, &gpa)) {
+            error_report("%s: failed to get gpa for host ptr %p", __func__, ptr);
+            return -EINVAL;
+        }
+        return cgs_mh_load_encrypted_page(f, gpa);
     } else {
         error_report("unknown encrypted flag %x", flag);
         return 1;
@@ -3728,6 +3735,10 @@ void colo_release_ram_cache(void)
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
+    if (confidential_guest()) {
+        cgs_mh_init();
+    }
+
     if (compress_threads_load_setup(f)) {
         return -1;
     }
@@ -3754,6 +3765,10 @@ static int ram_load_cleanup(void *opaque)
         rb->receivedmap = NULL;
     }
 
+    if (confidential_guest()) {
+        cgs_mh_cleanup();
+    }
+
     return 0;
 }
 
@@ -4024,6 +4039,7 @@ void colo_flush_ram_cache(void)
 static int ram_load_precopy(QEMUFile *f)
 {
     int flags = 0, ret = 0, invalid_flags = 0, len = 0, i = 0;
+
     /* ADVISE is earlier, it shows the source has the postcopy capability on */
     bool postcopy_advised = postcopy_is_advised();
     if (!migrate_use_compression()) {
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 06/12] migration: Skip ROM, non-RAM, and vga.vram memory region during RAM migration
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (4 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 05/12] migration: Load " Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 07/12] i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc Dov Murik
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

Migrating these memory regions hangs the in-guest migration handler.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/ram.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2d5889f795..f0df6780fb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2086,7 +2086,9 @@ static bool encrypted_test_list(RAMState *rs, RAMBlock *block,
     unsigned long gfn;
 
     /* ROM devices contains the unencrypted data */
-    if (memory_region_is_rom(block->mr)) {
+    if (memory_region_is_rom(block->mr) ||
+        memory_region_is_romd(block->mr) ||
+        !memory_region_is_ram(block->mr)) {
         return false;
     }
 
@@ -2098,6 +2100,10 @@ static bool encrypted_test_list(RAMState *rs, RAMBlock *block,
         return false;
     }
 
+    if (!strcmp(memory_region_name(block->mr), "vga.vram")) {
+        return false;
+    }
+
     /*
      * Translate page in ram_addr_t address space to GPA address
      * space using memory region.
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 07/12] i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (5 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 06/12] migration: Skip ROM, non-RAM, and vga.vram memory region during RAM migration Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 08/12] migration: Allow resetting the mirror vcpu to the MH entry point Dov Murik
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

If the mirror vcpu is not excluded, QEMU hangs when stopping the VM
during migration.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 target/i386/kvm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 6b20917fa5..04bbc89b48 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -241,7 +241,9 @@ void kvm_synchronize_all_tsc(void)
 
     if (kvm_enabled()) {
         CPU_FOREACH(cpu) {
-            run_on_cpu(cpu, do_kvm_synchronize_tsc, RUN_ON_CPU_NULL);
+            if (!cpu->mirror_vcpu) {
+                run_on_cpu(cpu, do_kvm_synchronize_tsc, RUN_ON_CPU_NULL);
+            }
         }
     }
 }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 08/12] migration: Allow resetting the mirror vcpu to the MH entry point
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (6 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 07/12] i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 09/12] migration: Add QMP command start-migration-handler Dov Murik
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

Add a function to reset the mirror vcpu so it'll start directly at the
entry point of the migration handler.

Note: In the patch below the GDT and EIP values are hard-coded to fit
the OVMF migration handler entry point implementation we currently have.
These values can be exposed in the OVMF GUID table and can be discovered
from there instead of being hard-coded here.
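
As a sketch of what that discovery could look like, the snippet below
walks a simplified GUIDed table laid out back to front, each entry being
<data> <2-byte length> <16-byte GUID>.  The entry format is modeled
loosely on how QEMU's pc_system_ovmf_table_find() parses OVMF's table;
the GUID bytes and the 4-byte little-endian payload are made-up
assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Made-up GUID for the migration-handler entry-point table entry. */
static const uint8_t MH_ENTRY_GUID[16] = {
    0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
    0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x10,
};

/*
 * Walk the table from its end toward its start.  Each entry's 2-byte
 * length covers the data, the length field itself, and the GUID
 * (i.e. data size + 18).  Returns 1 and fills *gpa on a match.
 * The 4-byte copy assumes a little-endian host, as on x86.
 */
static int find_guid_entry(const uint8_t *table, size_t size,
                           const uint8_t guid[16], uint32_t *gpa)
{
    size_t end = size;

    while (end >= 18) {
        uint16_t len;

        memcpy(&len, table + end - 18, 2);
        if (len < 18 || len > end) {
            return 0;                      /* malformed table */
        }
        if (memcmp(table + end - 16, guid, 16) == 0) {
            if (len < 18 + 4) {
                return 0;                  /* entry too short for a GPA */
            }
            memcpy(gpa, table + end - len, 4);
            return 1;
        }
        end -= len;                        /* step to the previous entry */
    }
    return 0;
}
```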

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 migration/confidential-ram.h |   2 +
 migration/confidential-ram.c | 112 +++++++++++++++++++++++++++++++++++
 2 files changed, 114 insertions(+)

diff --git a/migration/confidential-ram.h b/migration/confidential-ram.h
index 9a1027bdaf..af046f95cc 100644
--- a/migration/confidential-ram.h
+++ b/migration/confidential-ram.h
@@ -18,4 +18,6 @@ int cgs_mh_save_encrypted_page(QEMUFile *f, ram_addr_t src_gpa, uint32_t size,
 
 int cgs_mh_load_encrypted_page(QEMUFile *f, ram_addr_t dest_gpa);
 
+void cgs_mh_reset_mirror_vcpu(CPUState *s);
+
 #endif
diff --git a/migration/confidential-ram.c b/migration/confidential-ram.c
index 30002448b9..6e41cba878 100644
--- a/migration/confidential-ram.c
+++ b/migration/confidential-ram.c
@@ -8,6 +8,8 @@
 #include "io/channel.h"
 #include "qapi/error.h"
 #include "exec/memory.h"
+#include "sysemu/kvm.h"
+#include "kvm/kvm_i386.h"
 #include "trace.h"
 #include "confidential-ram.h"
 
@@ -225,3 +227,113 @@ int cgs_mh_load_encrypted_page(QEMUFile *f, ram_addr_t dest_gpa)
     }
     return ret;
 }
+
+void cgs_mh_reset_mirror_vcpu(CPUState *s)
+{
+    X86CPU *cpu = X86_CPU(s);
+    CPUX86State *env = &cpu->env;
+    uint64_t xcr0;
+    int i;
+
+    memset(env, 0, offsetof(CPUX86State, end_reset_fields));
+
+    env->old_exception = -1;
+
+    /* init to reset state */
+
+    env->hflags2 |= HF2_GIF_MASK;
+    env->hflags &= ~HF_GUEST_MASK;
+    env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_PE_MASK | HF_MP_MASK;
+
+    cpu_x86_update_cr0(env, 0x00010033);
+    env->a20_mask = ~0x0;
+    env->smbase = 0x30000;
+    env->msr_smi_count = 0;
+
+    /* The GDT is hard-coded to the one setup by OVMF */
+    env->gdt.base = 0x823600;
+    env->gdt.limit = 0x0047;
+    env->ldt.limit = 0xffff;
+    env->ldt.flags = DESC_P_MASK | (2 << DESC_TYPE_SHIFT);
+    env->tr.limit = 0xffff;
+    env->tr.flags = DESC_P_MASK | (11 << DESC_TYPE_SHIFT);
+
+    cpu_x86_load_seg_cache(env, R_CS, 0x38, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_CS_MASK | DESC_R_MASK | DESC_A_MASK);
+    cpu_x86_load_seg_cache(env, R_DS, 0x30, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_W_MASK | DESC_A_MASK);
+    cpu_x86_load_seg_cache(env, R_ES, 0x30, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_W_MASK | DESC_A_MASK);
+    cpu_x86_load_seg_cache(env, R_SS, 0x30, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_W_MASK | DESC_A_MASK);
+    cpu_x86_load_seg_cache(env, R_FS, 0x30, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_W_MASK | DESC_A_MASK);
+    cpu_x86_load_seg_cache(env, R_GS, 0x30, 0, 0xffffffff,
+                           DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
+                           DESC_W_MASK | DESC_A_MASK);
+
+    /* The EIP is hard-coded to the OVMF migration handler entry point */
+    env->eip = 0x823000;
+    /* env->regs[R_EDX] = env->cpuid_version; */
+
+    env->eflags = 0x2;
+
+    /* FPU init */
+    for (i = 0; i < 8; i++) {
+        env->fptags[i] = 1;
+    }
+    cpu_set_fpuc(env, 0x37f);
+
+    env->mxcsr = 0x1f80;
+    /* All units are in INIT state.  */
+    env->xstate_bv = 0;
+
+    env->pat = 0x0007040600070406ULL;
+    env->msr_ia32_misc_enable = MSR_IA32_MISC_ENABLE_DEFAULT;
+    if (env->features[FEAT_1_ECX] & CPUID_EXT_MONITOR) {
+        env->msr_ia32_misc_enable |= MSR_IA32_MISC_ENABLE_MWAIT;
+    }
+
+    memset(env->dr, 0, sizeof(env->dr));
+    env->dr[6] = DR6_FIXED_1;
+    env->dr[7] = DR7_FIXED_1;
+    cpu_breakpoint_remove_all(s, BP_CPU);
+    cpu_watchpoint_remove_all(s, BP_CPU);
+
+    xcr0 = XSTATE_FP_MASK;
+    env->xcr0 = xcr0;
+    cpu_x86_update_cr4(env, 0x00000668);
+
+    /*
+     * SDM 11.11.5 requires:
+     *  - IA32_MTRR_DEF_TYPE MSR.E = 0
+     *  - IA32_MTRR_PHYSMASKn.V = 0
+     * All other bits are undefined.  For simplification, zero it all.
+     */
+    env->mtrr_deftype = 0;
+    memset(env->mtrr_var, 0, sizeof(env->mtrr_var));
+    memset(env->mtrr_fixed, 0, sizeof(env->mtrr_fixed));
+
+    env->interrupt_injected = -1;
+    env->exception_nr = -1;
+    env->exception_pending = 0;
+    env->exception_injected = 0;
+    env->exception_has_payload = false;
+    env->exception_payload = 0;
+    env->nmi_injected = false;
+#if !defined(CONFIG_USER_ONLY)
+    /* We hard-wire the BSP to the first CPU. */
+    apic_designate_bsp(cpu->apic_state, s->cpu_index == 0);
+
+    s->halted = !cpu_is_bsp(cpu);
+
+    if (kvm_enabled()) {
+        kvm_arch_reset_vcpu(cpu);
+    }
+#endif
+}
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 09/12] migration: Add QMP command start-migration-handler
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (7 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 08/12] migration: Allow resetting the mirror vcpu to the MH entry point Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 10/12] migration: Add start-migrate-incoming QMP command Dov Murik
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

The start-migration-handler QMP command starts the mirror vcpu directly
at the migration handler entry point.

This is a temporary workaround to start up (resume) the mirror vcpu
which runs the in-guest migration handler (both on the source and the
target).

A proper solution would be to start it automatically when the 'migrate'
and 'migrate-incoming' QMP commands are executed.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 qapi/migration.json   | 12 ++++++++++++
 migration/migration.c | 12 ++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/qapi/migration.json b/qapi/migration.json
index 69c615ec4d..baff3c6bf7 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1504,6 +1504,18 @@
 ##
 { 'command': 'migrate-incoming', 'data': {'uri': 'str' } }
 
+##
+# @start-migration-handler:
+#
+# Start the mirror vcpu which runs the in-guest migration handler.
+#
+# Returns: nothing on success
+#
+# Since: 6.2
+#
+##
+{ 'command': 'start-migration-handler' }
+
 ##
 # @xen-save-devices-state:
 #
diff --git a/migration/migration.c b/migration/migration.c
index c9bc33fb10..a9f3a79e4f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -60,6 +60,7 @@
 #include "qemu/yank.h"
 #include "sysemu/cpus.h"
 #include "yank_functions.h"
+#include "confidential-ram.h"
 
 #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
 
@@ -2161,6 +2162,17 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
     once = false;
 }
 
+void qmp_start_migration_handler(Error **errp)
+{
+    CPUState *cpu;
+    CPU_FOREACH(cpu) {
+        if (cpu->mirror_vcpu) {
+            cgs_mh_reset_mirror_vcpu(cpu);
+            cpu_resume(cpu);
+        }
+    }
+}
+
 void qmp_migrate_recover(const char *uri, Error **errp)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 10/12] migration: Add start-migrate-incoming QMP command
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (8 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 09/12] migration: Add QMP command start-migration-handler Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 11/12] hw/isa/lpc_ich9: Allow updating an already-running VM Dov Murik
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

This command forces a running VM into a migrate-incoming state.  When
using guest-assisted migration (for confidential guests), the target
must be started so that its memory has the necessary code for the
migration helper.  After it is ready we can start receiving the incoming
migration connection.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 qapi/migration.json   | 26 ++++++++++++++++++++++++++
 migration/migration.c | 17 +++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/qapi/migration.json b/qapi/migration.json
index baff3c6bf7..da47b8534f 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1516,6 +1516,32 @@
 ##
 { 'command': 'start-migration-handler' }
 
+##
+# @start-migrate-incoming:
+#
+# Force start an incoming migration even in a running VM.  This is used by the
+# target VM in guest-assisted migration of a confidential guest.
+#
+# @uri: The Uniform Resource Identifier identifying the source or
+#       address to listen on
+#
+# Returns: nothing on success
+#
+# Since: 6.2
+#
+# Notes:
+#
+# The uri format is the same as the -incoming command-line option.
+#
+# Example:
+#
+# -> { "execute": "start-migrate-incoming",
+#      "arguments": { "uri": "tcp::4446" } }
+# <- { "return": {} }
+#
+##
+{ 'command': 'start-migrate-incoming', 'data': {'uri': 'str' } }
+
 ##
 # @xen-save-devices-state:
 #
diff --git a/migration/migration.c b/migration/migration.c
index a9f3a79e4f..0b9ab3decb 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2173,6 +2173,23 @@ void qmp_start_migration_handler(Error **errp)
     }
 }
 
+void qmp_start_migrate_incoming(const char *uri, Error **errp)
+{
+    Error *local_err = NULL;
+
+    if (!yank_register_instance(MIGRATION_YANK_INSTANCE, errp)) {
+        return;
+    }
+
+    vm_stop(RUN_STATE_PAUSED);
+    qemu_start_incoming_migration(uri, &local_err);
+
+    if (local_err) {
+        yank_unregister_instance(MIGRATION_YANK_INSTANCE);
+        error_propagate(errp, local_err);
+    }
+}
+
 void qmp_migrate_recover(const char *uri, Error **errp)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 11/12] hw/isa/lpc_ich9: Allow updating an already-running VM
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (9 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 10/12] migration: Add start-migrate-incoming QMP command Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2021-08-23 14:16 ` [RFC PATCH v2 12/12] docs: Add confidential guest live migration documentation Dov Murik
  2023-09-05  9:46 ` [RFC PATCH v2 00/12] Confidential guest-assisted live migration Shameerali Kolothum Thodi via
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

The post_load function crashed when loading the device state into an
already-running guest, because an existing memory region was not deleted
in ich9_lpc_rcba_update.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 hw/isa/lpc_ich9.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/isa/lpc_ich9.c b/hw/isa/lpc_ich9.c
index 5f9de0239c..ea07709c14 100644
--- a/hw/isa/lpc_ich9.c
+++ b/hw/isa/lpc_ich9.c
@@ -527,9 +527,10 @@ ich9_lpc_pmcon_update(ICH9LPCState *lpc)
 static int ich9_lpc_post_load(void *opaque, int version_id)
 {
     ICH9LPCState *lpc = opaque;
+    uint32_t rcba_old = pci_get_long(lpc->d.config + ICH9_LPC_RCBA);
 
     ich9_lpc_pmbase_sci_update(lpc);
-    ich9_lpc_rcba_update(lpc, 0 /* disabled ICH9_LPC_RCBA_EN */);
+    ich9_lpc_rcba_update(lpc, rcba_old);
     ich9_lpc_pmcon_update(lpc);
     return 0;
 }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2 12/12] docs: Add confidential guest live migration documentation
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (10 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 11/12] hw/isa/lpc_ich9: Allow updating an already-running VM Dov Murik
@ 2021-08-23 14:16 ` Dov Murik
  2023-09-05  9:46 ` [RFC PATCH v2 00/12] Confidential guest-assisted live migration Shameerali Kolothum Thodi via
  12 siblings, 0 replies; 14+ messages in thread
From: Dov Murik @ 2021-08-23 14:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Dov Murik, Hubertus Franke,
	Tobin Feldman-Fitzthum, Paolo Bonzini

The new page is linked from the main index; otherwise Sphinx complains
that "document isn't included in any toctree".  I assume there is a
better place for it in the documentation tree.

Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
---
 docs/confidential-guest-live-migration.rst | 145 +++++++++++++++++++++
 docs/confidential-guest-support.txt        |   5 +
 docs/index.rst                             |   1 +
 3 files changed, 151 insertions(+)
 create mode 100644 docs/confidential-guest-live-migration.rst

diff --git a/docs/confidential-guest-live-migration.rst b/docs/confidential-guest-live-migration.rst
new file mode 100644
index 0000000000..65b6111ff1
--- /dev/null
+++ b/docs/confidential-guest-live-migration.rst
@@ -0,0 +1,145 @@
+=================================
+Confidential Guest Live Migration
+=================================
+
+When migrating regular QEMU guests, QEMU reads the guest's RAM and sends it
+over to the migration target host, where QEMU writes it into the target
+guest's RAM and starts the VM.  This mechanism doesn't work when the guest
+memory is encrypted or QEMU is prevented from reading it in another way.
+
+In order to support live migration in such scenarios, QEMU relies on an
+in-guest migration helper which can securely extract RAM content from the
+guest in order to send it to the target.  The migration helper is implemented as
+part of the VM's firmware in OVMF.
+
+
+Migration flow
+==============
+
+Source VM
+---------
+
+The source VM is started with an extra mirror vcpu which is not part of the
+main VM but shares the same memory mapping.  This vcpu is started at a special
+entry point which runs a dedicated migration helper; the migration helper
+simply waits for commands from QEMU.  When migration starts using the
+``migrate`` command, QEMU starts saving the state of the different devices.
+When it reaches saving RAM pages, it'll check for each page whether it is
+encrypted or not; for encrypted pages, it'll send a command to the migration
+helper to extract the given page.  The migration helper receives this command,
+reads the page content, encrypts it with a transport key, and returns the
+newly-encrypted page to QEMU.  QEMU saves those pages to the outgoing migration
+stream using the ``RAM_SAVE_GUEST_MH_ENCRYPTED_PAGE`` subtype of the
+``RAM_SAVE_FLAG_ENCRYPTED_DATA`` page flag.
+
+When QEMU reaches the last stage of RAM migration, it stops the source VM to
+avoid dirtying the last pages of RAM.  However, the mirror vcpu must be kept
+running so the migration helper can still extract pages from the guest memory.
+
+Target VM
+---------
+
+Usually QEMU migration target VMs are started with the ``-incoming``
+command-line option which starts the VM paused.  However, in order to migrate
+confidential guests we must have the migration helper running inside the guest;
+in such a case, we start the target with a special ``-fw_cfg`` value that tells
+OVMF to load the migration handler code into memory and then enter a CPU dead
+loop.  After this short "boot" completes, QEMU can switch to the "migration
+incoming" mode; we do that with the new ``start-migrate-incoming`` QMP command
+that makes the target VM listen for incoming migration connections.
+
+QEMU will load the state of VM devices as it arrives from the incoming
+migration stream.  When it encounters a RAM page with the
+``RAM_SAVE_FLAG_ENCRYPTED_DATA`` flag and the
+``RAM_SAVE_GUEST_MH_ENCRYPTED_PAGE`` subtype, it will send its
+transport-encrypted content and guest physical address to the migration helper.
+The migration helper running inside the guest will decrypt the page using the
+transport key and place the content in memory (again, that memory page is not
+accessible to the host due to the confidential guest properties; for example,
+in SEV it is hardware-encrypted with a VM-specific key).
+
+
+Usage
+=====
+
+In order to start the source and target VMs with mirror vCPUs, the
+``mirrorvcpus=`` option must be passed to ``-smp``.  For example::
+
+    # ${QEMU} -smp 5,mirrorvcpus=1 ...
+
+This command starts a VM with 5 vcpus, of which 4 are main vcpus (available
+for the guest OS) and 1 is a mirror vcpu.
+
+Moreover, in both the source and target we need to instruct OVMF to start the
+migration helper running in the auxiliary vcpu.  This is achieved using the
+following command-line option::
+
+    # ${QEMU} -fw_cfg name=opt/ovmf/PcdSevIsMigrationHelper,string=0 ...
+
+In the target VM we need to add another ``-fw_cfg`` entry to instruct OVMF to
+start only the migration helper, which will wait for incoming pages.  Because
+the migration helper must be running when the incoming RAM pages are received,
+the target cannot be started with ``-incoming``: that option completely pauses
+the VM and never lets the migration helper run.  Instead, start the target VM
+without ``-incoming`` but with the following option::
+
+    # ${QEMU} -fw_cfg name=opt/ovmf/PcdSevIsMigrationTarget,string=1 ...
+
+After the VM boots into the migration helper, we instruct QEMU to listen for
+incoming migration connections by sending the following QMP command::
+
+    { "execute": "start-migrate-incoming",
+      "arguments": { "uri": "tcp:0.0.0.0:6666" } }
+
+Now that the target is ready, we instruct the source VM to start migrating its
+state using the regular ``migrate`` QMP command, supplying the target VM's
+listening address::
+
+    { "execute": "migrate",
+      "arguments": { "uri": "tcp:192.168.111.222:6666" } }
+
+
+Implementation details
+======================
+
+Migration helper <-> QEMU communication
+---------------------------------------
+
+The migration helper is running inside the guest (implemented as part of OVMF).
+QEMU communicates with it using a mailbox protocol over two shared (unencrypted)
+4K RAM pages.
+
+The first page contains a ``SevMigHelperCmdParams`` struct at offset 0x0
+(``cmd_params``) and a ``MigrationHelperHeader`` struct at offset 0x800
+(``io_hdr``).  The second page (``io_page``) is dedicated for encrypted page
+content.
+
+In order to save a confidential RAM page, QEMU will fill the ``cmd_params``
+struct to indicate the ``SEV_MIG_HELPER_CMD_ENCRYPT`` command and the requested
+gpa (guest physical address), and then set the ``go`` field to 1.  Meanwhile the
+migration helper waits for the ``go`` field to become non-zero; after it notices
+``go`` is 1 it'll read the gpa, read the content of the relevant page from the
+guest's memory, encrypt it with the transport key, and store the
+transport-encrypted page in the ``io_page``.  Additional envelope data, like
+the encryption IV, are stored in ``io_hdr``.  After the migration helper is
+done writing to ``io_page`` and ``io_hdr``, it sets the ``done`` field to 1.  At
+this point QEMU notices that the migration helper is done and can continue its
+part, which is saving the header and page to the outgoing migration stream.
+
+A similar process is used when loading a confidential RAM page from the incoming
+migration stream.  QEMU reads the header and the encrypted page from the stream,
+and copies them into the shared areas ``io_hdr`` and ``io_page``, respectively.
+It then fills the ``cmd_params`` struct to indicate the
+``SEV_MIG_HELPER_CMD_DECRYPT`` command and the gpa, and sets ``go`` to 1.  The
+migration helper will notice the command, decrypt the page using the transport
+key, place the decrypted content at the requested gpa, and set
+``done`` to 1 to allow QEMU to continue processing the next item in the incoming
+migration stream.
+
+Shared pages address discovery
+------------------------------
+In the current implementation the address of the two shared pages is hard-coded
+in both OVMF and QEMU.  We plan for OVMF to expose this address via its GUIDed
+table and let QEMU discover it using ``pc_system_ovmf_table_find()``.
diff --git a/docs/confidential-guest-support.txt b/docs/confidential-guest-support.txt
index 71d07ba57a..bed1601fbb 100644
--- a/docs/confidential-guest-support.txt
+++ b/docs/confidential-guest-support.txt
@@ -47,3 +47,8 @@ s390x Protected Virtualization (PV)
     docs/system/s390x/protvirt.rst
 
 Other mechanisms may be supported in future.
+
+Live migration support
+----------------------
+Details regarding confidential guest live migration are in:
+    docs/confidential-guest-live-migration.rst
diff --git a/docs/index.rst b/docs/index.rst
index 5f7eaaa632..f2015de814 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -17,3 +17,4 @@ Welcome to QEMU's documentation!
    interop/index
    specs/index
    devel/index
+   confidential-guest-live-migration
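
The shared-page layout described in the new documentation above
(``cmd_params`` at offset 0x0, ``io_hdr`` at offset 0x800, and a second
page dedicated to encrypted content) can be sketched with C structs.
Only the offsets and the page roles come from the documentation; the
field contents shown are assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fields; only the offsets below are documented. */
struct SevMigHelperCmdParams {
    uint32_t cmd;           /* e.g. encrypt/decrypt request */
    uint32_t go;            /* set by QEMU to start a command */
    uint32_t done;          /* set by the helper on completion */
    uint64_t gpa;           /* guest physical address to operate on */
};

struct MigrationHelperHeader {
    uint8_t  iv[16];        /* transport-encryption envelope data */
    uint32_t payload_size;
};

/* First shared 4K page: cmd_params at 0x0, io_hdr at 0x800. */
struct mh_shared_page {
    struct SevMigHelperCmdParams cmd_params;
    uint8_t pad[0x800 - sizeof(struct SevMigHelperCmdParams)];
    struct MigrationHelperHeader io_hdr;
};

/* Second shared 4K page holds only transport-encrypted page content. */
typedef uint8_t mh_io_page[4096];
```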
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* RE: [RFC PATCH v2 00/12] Confidential guest-assisted live migration
  2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
                   ` (11 preceding siblings ...)
  2021-08-23 14:16 ` [RFC PATCH v2 12/12] docs: Add confidential guest live migration documentation Dov Murik
@ 2023-09-05  9:46 ` Shameerali Kolothum Thodi via
  12 siblings, 0 replies; 14+ messages in thread
From: Shameerali Kolothum Thodi via @ 2023-09-05  9:46 UTC (permalink / raw)
  To: Dov Murik, qemu-devel
  Cc: Tom Lendacky, Ashish Kalra, Brijesh Singh, Michael S. Tsirkin,
	Steve Rutherford, James Bottomley, Juan Quintela,
	Dr. David Alan Gilbert, Hubertus Franke, Tobin Feldman-Fitzthum,
	Paolo Bonzini


> -----Original Message-----
> From: Qemu-devel
> [mailto:qemu-devel-bounces+shameerali.kolothum.thodi=huawei.com@nong
> nu.org] On Behalf Of Dov Murik
> Sent: 23 August 2021 15:16
> To: qemu-devel@nongnu.org
> Cc: Tom Lendacky <thomas.lendacky@amd.com>; Ashish Kalra
> <ashish.kalra@amd.com>; Brijesh Singh <brijesh.singh@amd.com>; Michael
> S. Tsirkin <mst@redhat.com>; Steve Rutherford <srutherford@google.com>;
> James Bottomley <jejb@linux.ibm.com>; Juan Quintela
> <quintela@redhat.com>; Dr. David Alan Gilbert <dgilbert@redhat.com>; Dov
> Murik <dovmurik@linux.ibm.com>; Hubertus Franke <frankeh@us.ibm.com>;
> Tobin Feldman-Fitzthum <tobin@linux.ibm.com>; Paolo Bonzini
> <pbonzini@redhat.com>
> Subject: [RFC PATCH v2 00/12] Confidential guest-assisted live migration
> 
> This is an RFC series for fast migration of confidential guests using an
> in-guest migration helper that lives in OVMF.  QEMU VM live migration
> needs to read source VM's RAM and write it in the target VM; this
> mechanism doesn't work when the guest memory is encrypted or QEMU is
> prevented from reading it in another way.  In order to support live
> migration in such scenarios, we introduce an in-guest migration helper
> which can securely extract RAM content from the guest in order to send
> it to the target.  The migration helper is implemented as part of the
> VM's firmware in OVMF.
> 
> We've implemented and tested this on AMD SEV, but expect most of the
> processes can be used with other technologies that prevent direct access
> of hypervisor to the guest's memory.  Specifically, we don't use SEV's
> PSP migration commands (SEV_SEND_START, SEV_RECEIVE_START, etc) at all;
> but note that the mirror VM relies on
> KVM_CAP_VM_COPY_ENC_CONTEXT_FROM
> to shared the SEV ASID with the main VM.

Hi Dov,

Sorry if I missed it, but are there any updates to this series, or a
revised version of it?  This guest-assisted method seems to be a good
generic approach for live migration, and I wonder whether it is worth a
look for ARM CCA as well (I am not sure whether the ARM RMM spec will
have any specific proposal for live migration, but I couldn't find
anything public yet).

Please let me know if you plan to re-spin or there are any concerns with
this approach. Appreciate if you can point me to any relevant discussion
threads.

Thanks,
Shameer

> 
> Corresponding RFC patches for OVMF have been posted by Tobin
> Feldman-Fitzthum on edk2-devel [1].  Those include the crux of the
> migration helper: a mailbox protocol over a shared memory page which
> allows communication between QEMU and the migration helper.  In the
> source VM this is used to read a page and encrypt it for transport; in
> the target it is used to decrypt the incoming page and storing the
> content in the correct address in the guest memory.  All encryption and
> decryption operations occur inside the trusted context in the VM, and
> therefore the VM's memory plaintext content is never accessible to the
> hosts participating in the migration.
> 
> In order to allow OVMF to run the migration helper in parallel to the
> guest OS, we use a mirror VM [3], which shares the same memory mapping
> and SEV ASID as the main VM but has its own run loop.  To start the
> mirror vcpu and the migration handler, we added a temporary
> start-migration-handler QMP command; in a future version this will be
> removed and performed as part of the migrate QMP command.
> 
> In the target VM we need the migration handler running to receive
> incoming RAM pages; to achieve that, we boot the VM into OVMF with a
> special fw_cfg value that causes OVMF to not boot the guest OS; we then
> allow QEMU to receive an incoming migration by issuing a new
> start-migrate-incoming QMP command.
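On the QMP wire, driving the two temporary commands described above would
presumably look something like the following; the cover letter does not
show the commands' argument lists, so the bare invocations and empty
replies here are an assumption:

```json
{ "execute": "start-migration-handler" }
{ "return": {} }
```

and on the target, after booting OVMF with the special fw_cfg value:

```json
{ "execute": "start-migrate-incoming" }
{ "return": {} }
```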
> 
> The confidential RAM migration requires checking whether a given guest
> RAM page is encrypted or not.  This is achieved using SEV shared regions
> list tracking, which is implemented as part of the SEV live migration patch
> series [2].  This feature tracks hypercalls from OVMF and guest Linux to
> report changes of page encryption status so that QEMU has an up-to-date
> view of which memory regions are shared and which are encrypted.
> 
> We've left a few rough edges in this RFC but decided to publish it to
> start the community discussion.  TODOs:
> 
> 1. QMP commands start-migration-handler and start-migrate-incoming are
>    developer tools and should be performed automatically.
> 2. The entry point address of the in-guest migration handler and its GDT
>    are currently hard-coded in QEMU (patch 8); instead they should be
>    discovered using pc_system_ovmf_table_find.  Same applies for the
>    mailbox address (patch 1).
> 3. For simplicity, this patch series forces the use of the
>    guest-assisted migration instead of the SEV PSP-based migration.
>    Ideally we might want the user to choose the desired mode using
>    migrate-set-parameters or a similar mechanism.
> 4. There is currently no discovery protocol between QEMU and OVMF to
>    verify that OVMF indeed supports an in-guest migration handler.
> 
> 
> List of patches in this series:
> 
> 1-3: introduce new confidential RAM migration functions which communicate
>      with the migration helper.
> 4-6: use the new MH communication functions when migrating encrypted RAM
>      pages.
> 7-9: allow starting migration handler on mirror vcpu with QMP command
>      start-migration-handler
> 10:  introduce the start-migrate-incoming QMP command to switch the
>      target into accepting the incoming migration.
> 11:  fix devices issues when loading state into a live VM
> 12:  add documentation
> 
> 
> This patch series is based on top of:
> 
> 1. Add SEV guest live migration support, from Ashish Kalra [2]
> 2. Support for mirror VM, from Ashish Kalra [3]
> 
> [1] https://edk2.groups.io/g/devel/message/79517
> [2] https://lore.kernel.org/qemu-devel/cover.1628076205.git.ashish.kalra@amd.com/
> [3] https://lore.kernel.org/qemu-devel/cover.1629118207.git.ashish.kalra@amd.com/
> 
> 
> Changes from RFC v1:
>  - Use an SEV mirror VM for the migration handler (instead of
>    auxiliary vcpus)
> 
> RFC v1:
> https://lore.kernel.org/qemu-devel/20210302204822.81901-1-dovmurik@linux.vnet.ibm.com/
> 
> 
> Dov Murik (12):
>   migration: Add helpers to save confidential RAM
>   migration: Add helpers to load confidential RAM
>   migration: Introduce gpa_inside_migration_helper_shared_area
>   migration: Save confidential guest RAM using migration helper
>   migration: Load confidential guest RAM using migration helper
>   migration: Skip ROM, non-RAM, and vga.vram memory region during RAM
>     migration
>   i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc
>   migration: Allow resetting the mirror vcpu to the MH entry point
>   migration: Add QMP command start-migration-handler
>   migration: Add start-migrate-incoming QMP command
>   hw/isa/lpc_ich9: Allow updating an already-running VM
>   docs: Add confidential guest live migration documentation
> 
>  docs/confidential-guest-live-migration.rst | 145 +++++++++
>  docs/confidential-guest-support.txt        |   5 +
>  docs/index.rst                             |   1 +
>  qapi/migration.json                        |  38 +++
>  include/sysemu/sev.h                       |   1 +
>  migration/confidential-ram.h               |  23 ++
>  hw/isa/lpc_ich9.c                          |   3 +-
>  migration/confidential-ram.c               | 339 +++++++++++++++++++++
>  migration/migration.c                      |  29 ++
>  migration/ram.c                            | 133 +++++++-
>  target/i386/kvm/kvm.c                      |   4 +-
>  migration/meson.build                      |   2 +-
>  migration/trace-events                     |   4 +
>  13 files changed, 714 insertions(+), 13 deletions(-)
>  create mode 100644 docs/confidential-guest-live-migration.rst
>  create mode 100644 migration/confidential-ram.h
>  create mode 100644 migration/confidential-ram.c
> 
> --
> 2.20.1
> 




Thread overview: 14+ messages
-- links below jump to the message on this page --
2021-08-23 14:16 [RFC PATCH v2 00/12] Confidential guest-assisted live migration Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 01/12] migration: Add helpers to save confidential RAM Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 02/12] migration: Add helpers to load " Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 03/12] migration: Introduce gpa_inside_migration_helper_shared_area Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 04/12] migration: Save confidential guest RAM using migration helper Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 05/12] migration: Load " Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 06/12] migration: Skip ROM, non-RAM, and vga.vram memory region during RAM migration Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 07/12] i386/kvm: Exclude mirror vcpu in kvm_synchronize_all_tsc Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 08/12] migration: Allow resetting the mirror vcpu to the MH entry point Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 09/12] migration: Add QMP command start-migration-handler Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 10/12] migration: Add start-migrate-incoming QMP command Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 11/12] hw/isa/lpc_ich9: Allow updating an already-running VM Dov Murik
2021-08-23 14:16 ` [RFC PATCH v2 12/12] docs: Add confidential guest live migration documentation Dov Murik
2023-09-05  9:46 ` [RFC PATCH v2 00/12] Confidential guest-assisted live migration Shameerali Kolothum Thodi via
