* [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support
@ 2019-06-20 18:03 Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 02/12] kvm: introduce high-level API to support encrypted guest migration Singh, Brijesh
                   ` (11 more replies)
  0 siblings, 12 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

AMD SEV encrypts the memory of VMs and, because this encryption is done using
an address tweak, the hypervisor will not be able to simply copy ciphertext
between machines to migrate a VM. Instead, the AMD SEV Key Management API
provides a set of functions which the hypervisor can use to package a
guest's encrypted pages for migration, while maintaining the confidentiality
provided by AMD SEV.

The patch series adds the support required in QEMU to perform SEV
guest live migration. Before initiating the live migration, a user
should use the newly added 'migrate-set-sev-info' command to pass the
target machine's certificate chain. See docs/amd-memory-encryption.txt
for further details.
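
For example, the expected flow on the source side looks roughly like this
(the placeholders stand for base64-encoded blobs; see the documentation
added by this series for the exact syntax):

    (QMP) migrate-set-sev-info pdh=<target_pdh> plat-cert=<target_cert_chain> \
          amd-cert=<amd_cert>
    (QMP) migrate tcp:0:4444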

The patch series depends on kernel patches available here:
https://marc.info/?l=kvm&m=156104873409876&w=2

The complete tree with these patches is available at:
https://github.com/codomania/qemu/tree/sev-migration-rfc-v1

Brijesh Singh (12):
  linux-headers: update kernel header to include SEV migration commands
  kvm: introduce high-level API to support encrypted guest migration
  migration/ram: add support to send encrypted pages
  kvm: add support to sync the page encryption state bitmap
  doc: update AMD SEV API spec web link
  doc: update AMD SEV to include Live migration flow
  target/i386: sev: do not create launch context for an incoming guest
  target.json: add migrate-set-sev-info command
  target/i386: sev: add support to encrypt the outgoing page
  target/i386: sev: add support to load incoming encrypted page
  migration: add support to migrate page encryption bitmap
  target/i386: sev: remove migration blocker

 accel/kvm/kvm-all.c            |  75 ++++++
 accel/kvm/sev-stub.c           |  28 ++
 accel/stubs/kvm-stub.c         |  30 +++
 docs/amd-memory-encryption.txt |  46 +++-
 include/exec/ram_addr.h        |   2 +
 include/sysemu/kvm.h           |  33 +++
 include/sysemu/sev.h           |   9 +
 linux-headers/linux/kvm.h      |  53 ++++
 migration/ram.c                | 121 ++++++++-
 qapi/target.json               |  18 ++
 target/i386/monitor.c          |  10 +
 target/i386/sev-stub.c         |   5 +
 target/i386/sev.c              | 471 +++++++++++++++++++++++++++++++--
 target/i386/sev_i386.h         |  11 +-
 target/i386/trace-events       |   9 +
 15 files changed, 902 insertions(+), 19 deletions(-)

-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 02/12] kvm: introduce high-level API to support encrypted guest migration
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 01/12] linux-headers: update kernel header to include SEV migration commands Singh, Brijesh
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

When memory encryption is enabled in a VM, the guest pages are encrypted
with the guest-specific key to protect the confidentiality of the guest
data. To support live migration we need platform-specific hooks to access
the guest memory.

kvm_memcrypt_save_outgoing_page() can be used by the sender to write an
encrypted page, and the metadata associated with it, to the socket.

kvm_memcrypt_load_incoming_page() can be used by the receiver to read the
incoming encrypted pages from the socket and load them into the guest
memory.

Encrypted VMs have the concept of private and shared memory. Private
memory is encrypted with the guest-specific key, while shared memory may
be encrypted with the hypervisor key. The KVM_{SET,GET}_PAGE_ENC_BITMAP
ioctls can be used to get/set the page encryption bitmap from/to the
hypervisor.

kvm_memcrypt_sync_page_enc_bitmap() can be used by the sender to get the
page encryption bitmap, which is used to determine the state of each page
(private or shared).

kvm_memcrypt_save_outgoing_page_enc_bitmap() can be used by the sender to
write the page encryption bitmap to the socket.

kvm_memcrypt_load_incoming_page_enc_bitmap() can be used by the receiver
to read the page encryption bitmap from the socket.
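
Below is a minimal sketch, not part of this patch, of how a migration-path
caller is expected to consume these hooks; the example_* helpers are
hypothetical and error handling is omitted (the real callers are wired up
in the later patches of this series):

    #include "qemu/osdep.h"
    #include "sysemu/kvm.h"
    #include "migration/qemu-file.h"

    /* sender side: let the platform encrypt the page and frame it on the wire */
    static void example_send_one_page(QEMUFile *f, uint8_t *host_page,
                                      uint32_t page_size)
    {
        uint64_t bytes_sent = 0;

        if (kvm_memcrypt_enabled()) {
            kvm_memcrypt_save_outgoing_page(f, host_page, page_size, &bytes_sent);
        }
    }

    /* receiver side: read the SEV packet and install the page in guest RAM */
    static void example_load_one_page(QEMUFile *f, uint8_t *host_page)
    {
        if (kvm_memcrypt_enabled()) {
            kvm_memcrypt_load_incoming_page(f, host_page);
        }
    }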

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 accel/kvm/kvm-all.c    | 68 ++++++++++++++++++++++++++++++++++++++++++
 accel/kvm/sev-stub.c   | 28 +++++++++++++++++
 accel/stubs/kvm-stub.c | 30 +++++++++++++++++++
 include/sysemu/kvm.h   | 33 ++++++++++++++++++++
 include/sysemu/sev.h   |  9 ++++++
 5 files changed, 168 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index b0c4bed6e3..4d5ff8b9f5 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -109,6 +109,15 @@ struct KVMState
     /* memory encryption */
     void *memcrypt_handle;
     int (*memcrypt_encrypt_data)(void *handle, uint8_t *ptr, uint64_t len);
+    int (*memcrypt_save_outgoing_page)(void *ehandle, QEMUFile *f,
+            uint8_t *ptr, uint32_t sz, uint64_t *bytes_sent);
+    int (*memcrypt_load_incoming_page)(void *ehandle, QEMUFile *f,
+            uint8_t *ptr);
+    int (*memcrypt_load_incoming_page_enc_bitmap)(void *ehandle, QEMUFile *f);
+    int (*memcrypt_save_outgoing_page_enc_bitmap)(void *ehandle, QEMUFile *f,
+            uint8_t *host, uint64_t length, unsigned long *bmap);
+    int (*memcrypt_sync_page_enc_bitmap)(void *ehandle, uint8_t *host,
+            uint64_t length, unsigned long *bmap);
 };
 
 KVMState *kvm_state;
@@ -164,6 +173,65 @@ int kvm_memcrypt_encrypt_data(uint8_t *ptr, uint64_t len)
     return 1;
 }
 
+int kvm_memcrypt_save_outgoing_page(QEMUFile *f, uint8_t *ptr,
+                                    uint32_t size, uint64_t *bytes_sent)
+{
+    if (kvm_state->memcrypt_handle &&
+        kvm_state->memcrypt_save_outgoing_page) {
+        return kvm_state->memcrypt_save_outgoing_page(kvm_state->memcrypt_handle,
+                    f, ptr, size, bytes_sent);
+    }
+
+    return 1;
+}
+
+int kvm_memcrypt_load_incoming_page(QEMUFile *f, uint8_t *ptr)
+{
+    if (kvm_state->memcrypt_handle &&
+        kvm_state->memcrypt_load_incoming_page) {
+        return kvm_state->memcrypt_load_incoming_page(kvm_state->memcrypt_handle,
+                    f, ptr);
+    }
+
+    return 1;
+}
+
+int kvm_memcrypt_load_incoming_page_enc_bitmap(QEMUFile *f)
+{
+    if (kvm_state->memcrypt_handle &&
+        kvm_state->memcrypt_load_incoming_page_enc_bitmap) {
+        return kvm_state->memcrypt_load_incoming_page_enc_bitmap(
+                kvm_state->memcrypt_handle, f);
+    }
+
+    return 1;
+}
+
+int kvm_memcrypt_save_outgoing_page_enc_bitmap(QEMUFile *f, uint8_t *host,
+                                               uint64_t length,
+                                               unsigned long *bmap)
+{
+    if (kvm_state->memcrypt_handle &&
+        kvm_state->memcrypt_save_outgoing_page_enc_bitmap) {
+        return kvm_state->memcrypt_save_outgoing_page_enc_bitmap(
+                kvm_state->memcrypt_handle, f, host, length, bmap);
+    }
+
+    return 1;
+}
+
+int kvm_memcrypt_sync_page_enc_bitmap(uint8_t *host, uint64_t length,
+                                      unsigned long *bmap)
+{
+    if (kvm_state->memcrypt_handle &&
+        kvm_state->memcrypt_sync_page_enc_bitmap) {
+        return kvm_state->memcrypt_sync_page_enc_bitmap(
+                kvm_state->memcrypt_handle, host, length, bmap);
+    }
+
+    return 1;
+}
+
 static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
 {
     KVMState *s = kvm_state;
diff --git a/accel/kvm/sev-stub.c b/accel/kvm/sev-stub.c
index 4f97452585..5d8c3f2ecd 100644
--- a/accel/kvm/sev-stub.c
+++ b/accel/kvm/sev-stub.c
@@ -24,3 +24,31 @@ void *sev_guest_init(const char *id)
 {
     return NULL;
 }
+
+int sev_save_outgoing_page(void *handle, QEMUFile *f, uint8_t *ptr,
+                           uint32_t size, uint64_t *bytes_sent)
+{
+    return 1;
+}
+
+int sev_load_incoming_page(void *handle, QEMUFile *f, uint8_t *ptr)
+{
+    return 1;
+}
+
+int sev_load_incoming_page_enc_bitmap(void *handle, QEMUFile *f)
+{
+    return 1;
+}
+
+int sev_save_outgoing_page_enc_bitmap(void *handle, QEMUFile *f,
+                                      unsigned long *bmap)
+{
+    return 1;
+}
+
+int sev_sync_page_enc_bitmap(void *handle, uint8_t *host, uint64_t size,
+                             unsigned long *bitmap)
+{
+    return 1;
+}
diff --git a/accel/stubs/kvm-stub.c b/accel/stubs/kvm-stub.c
index 6feb66ed80..bef7376985 100644
--- a/accel/stubs/kvm-stub.c
+++ b/accel/stubs/kvm-stub.c
@@ -114,6 +114,36 @@ int kvm_memcrypt_encrypt_data(uint8_t *ptr, uint64_t len)
   return 1;
 }
 
+int kvm_memcrypt_save_outgoing_page(QEMUFile *f, uint8_t *ptr,
+                                    uint32_t size, uint64_t *bytes_sent)
+{
+    return 1;
+}
+
+int kvm_memcrypt_load_incoming_page(QEMUFile *f, uint8_t *ptr)
+{
+    return 1;
+}
+
+int kvm_memcrypt_load_incoming_page_enc_bitmap(QEMUFile *f)
+{
+    return 1;
+}
+
+int kvm_memcrypt_save_outgoing_page_enc_bitmap(QEMUFile *f, uint8_t *host,
+                                               uint64_t length,
+                                               unsigned long *bmap)
+{
+    return 1;
+}
+
+int kvm_memcrypt_sync_page_enc_bitmap(uint8_t *host, uint64_t size,
+                                      unsigned long *bitmap)
+{
+    return 1;
+}
+
+
 #ifndef CONFIG_USER_ONLY
 int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
 {
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index a6d1cd190f..f85a60e411 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -246,6 +246,39 @@ bool kvm_memcrypt_enabled(void);
  */
 int kvm_memcrypt_encrypt_data(uint8_t *ptr, uint64_t len);
 
+/**
+ * kvm_memcrypt_save_outgoing_page - encrypt the outgoing page
+ * and write it to the wire.
+ */
+int kvm_memcrypt_save_outgoing_page(QEMUFile *f, uint8_t *ptr, uint32_t size,
+                                    uint64_t *bytes_sent);
+
+/**
+ * kvm_memcrypt_load_incoming_page - read the incoming encrypted page and copy
+ * it into the guest memory space.
+ */
+int kvm_memcrypt_load_incoming_page(QEMUFile *f, uint8_t *ptr);
+
+/**
+ * kvm_memcrypt_load_incoming_page_enc_bitmap: read the page encryption bitmap
+ * from the socket and pass it to the hypervisor.
+ */
+int kvm_memcrypt_load_incoming_page_enc_bitmap(QEMUFile *f);
+
+/**
+ * kvm_memcrypt_sync_page_enc_bitmap: sync the page encryption bitmap
+ * The caller is responsible for allocating/freeing the bitmap.
+ */
+int kvm_memcrypt_sync_page_enc_bitmap(uint8_t *host, uint64_t size,
+                                      unsigned long *bitmap);
+
+/**
+ * kvm_memcrypt_save_outgoing_page_enc_bitmap: write the page encryption bitmap
+ * to the socket.
+ */
+int kvm_memcrypt_save_outgoing_page_enc_bitmap(QEMUFile *f, uint8_t *host,
+                                               uint64_t length,
+                                               unsigned long *bmap);
 
 #ifdef NEED_CPU_H
 #include "cpu.h"
diff --git a/include/sysemu/sev.h b/include/sysemu/sev.h
index 98c1ec8d38..009be45230 100644
--- a/include/sysemu/sev.h
+++ b/include/sysemu/sev.h
@@ -18,4 +18,13 @@
 
 void *sev_guest_init(const char *id);
 int sev_encrypt_data(void *handle, uint8_t *ptr, uint64_t len);
+int sev_save_outgoing_page(void *handle, QEMUFile *f, uint8_t *ptr,
+                           uint32_t size, uint64_t *bytes_sent);
+int sev_load_incoming_page(void *handle, QEMUFile *f, uint8_t *ptr);
+int sev_load_incoming_page_enc_bitmap(void *handle, QEMUFile *f);
+int sev_save_outgoing_page_enc_bitmap(void *handle, QEMUFile *f,
+                                      uint8_t *host, uint64_t length,
+                                      unsigned long *bmap);
+int sev_sync_page_enc_bitmap(void *handle, uint8_t *host, uint64_t size,
+                             unsigned long *bitmap);
 #endif
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 01/12] linux-headers: update kernel header to include SEV migration commands
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 02/12] kvm: introduce high-level API to support encrypted guest migration Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 03/12] migration/ram: add support to send encrypted pages Singh, Brijesh
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 linux-headers/linux/kvm.h | 53 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index c8423e760c..2bdd6a908e 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -492,6 +492,16 @@ struct kvm_dirty_log {
 	};
 };
 
+/* for KVM_GET_PAGE_ENC_BITMAP */
+struct kvm_page_enc_bitmap {
+	__u64 start;
+	__u64 num_pages;
+	union {
+		void *enc_bitmap; /* one bit per page */
+		__u64 padding2;
+	};
+};
+
 /* for KVM_CLEAR_DIRTY_LOG */
 struct kvm_clear_dirty_log {
 	__u32 slot;
@@ -1451,6 +1461,9 @@ struct kvm_enc_region {
 /* Available with KVM_CAP_ARM_SVE */
 #define KVM_ARM_VCPU_FINALIZE	  _IOW(KVMIO,  0xc2, int)
 
+#define KVM_GET_PAGE_ENC_BITMAP	_IOW(KVMIO, 0xc2, struct kvm_page_enc_bitmap)
+#define KVM_SET_PAGE_ENC_BITMAP	_IOW(KVMIO, 0xc3, struct kvm_page_enc_bitmap)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */
@@ -1531,6 +1544,46 @@ struct kvm_sev_dbg {
 	__u32 len;
 };
 
+struct kvm_sev_send_start {
+	__u32 policy;
+	__u64 pdh_cert_uaddr;
+	__u32 pdh_cert_len;
+	__u64 plat_cert_uaddr;
+	__u32 plat_cert_len;
+	__u64 amd_cert_uaddr;
+	__u32 amd_cert_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
+struct kvm_sev_send_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
+struct kvm_sev_receive_start {
+	__u32 handle;
+	__u32 policy;
+	__u64 pdh_uaddr;
+	__u32 pdh_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
+struct kvm_sev_receive_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 03/12] migration/ram: add support to send encrypted pages
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 02/12] kvm: introduce high-level API to support encrypted guest migration Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 01/12] linux-headers: update kernel header to include SEV migration commands Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 04/12] kvm: add support to sync the page encryption state bitmap Singh, Brijesh
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

When memory encryption is enabled, the guest memory is encrypted with the
guest-specific key. This patch introduces the RAM_SAVE_FLAG_ENCRYPTED_PAGE
flag to distinguish encrypted data from plaintext, since encrypted pages
need special handling. kvm_memcrypt_save_outgoing_page() is used by the
sender to write the encrypted pages onto the socket; similarly,
kvm_memcrypt_load_incoming_page() is used by the target to read the
encrypted pages from the socket and load them into the guest memory.
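
As a rough sketch (simplified from the ram_load() hunk below, with a
hypothetical helper name), the incoming side dispatches on the new flag as
follows:

    static int example_load_page(QEMUFile *f, uint64_t flags, uint8_t *host)
    {
        if (flags & RAM_SAVE_FLAG_ENCRYPTED_PAGE) {
            /* hand the SEV packet to the platform hook */
            return kvm_memcrypt_load_incoming_page(f, host) ? -EINVAL : 0;
        }

        /* plaintext page, handled as before */
        qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
        return 0;
    }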

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 migration/ram.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 908517fc2b..3c8977d508 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -57,6 +57,7 @@
 #include "qemu/uuid.h"
 #include "savevm.h"
 #include "qemu/iov.h"
+#include "sysemu/kvm.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -76,6 +77,7 @@
 #define RAM_SAVE_FLAG_XBZRLE   0x40
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
+#define RAM_SAVE_FLAG_ENCRYPTED_PAGE   0x200
 
 static inline bool is_zero_range(uint8_t *p, uint64_t size)
 {
@@ -460,6 +462,9 @@ static QemuCond decomp_done_cond;
 static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                  ram_addr_t offset, uint8_t *source_buf);
 
+static int ram_save_encrypted_page(RAMState *rs, PageSearchStatus *pss,
+                                   bool last_stage);
+
 static void *do_data_compress(void *opaque)
 {
     CompressParam *param = opaque;
@@ -2006,6 +2011,36 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
     return 1;
 }
 
+/**
+ * ram_save_encrypted_page - send the given encrypted page to the stream
+ */
+static int ram_save_encrypted_page(RAMState *rs, PageSearchStatus *pss,
+                                   bool last_stage)
+{
+    int ret;
+    uint8_t *p;
+    RAMBlock *block = pss->block;
+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
+    uint64_t bytes_xmit;
+
+    p = block->host + offset;
+
+    ram_counters.transferred +=
+        save_page_header(rs, rs->f, block,
+                    offset | RAM_SAVE_FLAG_ENCRYPTED_PAGE);
+
+    ret = kvm_memcrypt_save_outgoing_page(rs->f, p,
+                        TARGET_PAGE_SIZE, &bytes_xmit);
+    if (ret) {
+        return -1;
+    }
+
+    ram_counters.transferred += bytes_xmit;
+    ram_counters.normal++;
+
+    return 1;
+}
+
 static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                  ram_addr_t offset, uint8_t *source_buf)
 {
@@ -2450,6 +2485,16 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         return res;
     }
 
+    /*
+     * If memory encryption is enabled then use memory encryption APIs
+     * to write the outgoing buffer to the wire. The encryption APIs
+     * will take care of accessing the guest memory and re-encrypt it
+     * for the transport purposes.
+     */
+     if (kvm_memcrypt_enabled()) {
+        return ram_save_encrypted_page(rs, pss, last_stage);
+     }
+
     if (save_compress_page(rs, block, offset)) {
         return 1;
     }
@@ -4271,7 +4316,8 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         }
 
         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
-                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
+                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE |
+                     RAM_SAVE_FLAG_ENCRYPTED_PAGE)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
             /*
@@ -4391,6 +4437,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                 break;
             }
             break;
+        case RAM_SAVE_FLAG_ENCRYPTED_PAGE:
+            if (kvm_memcrypt_load_incoming_page(f, host)) {
+                error_report("Failed to load incoming encrypted page");
+                ret = -EINVAL;
+            }
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             multifd_recv_sync_main();
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 04/12] kvm: add support to sync the page encryption state bitmap
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (2 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 03/12] migration/ram: add support to send encrypted pages Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 05/12] doc: update AMD SEV API spec web link Singh, Brijesh
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

SEV VMs have the concept of private and shared memory. Private memory is
encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. The KVM_GET_PAGE_ENC_BITMAP ioctl can be used to
get a bitmap indicating whether a guest page is private or shared. A private
page must be transmitted using the SEV migration commands.
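
A minimal sketch of how the new ioctl is consumed (it mirrors
sev_sync_page_enc_bitmap() in this patch; the helper name is hypothetical
and the "set bit means private" convention follows this series):

    static int example_fetch_enc_bitmap(KVMState *s, unsigned long base_gpa,
                                        unsigned long npages,
                                        unsigned long *bmap)
    {
        struct kvm_page_enc_bitmap e = {
            .start      = base_gpa >> TARGET_PAGE_BITS, /* first guest frame */
            .num_pages  = npages,
            .enc_bitmap = bmap,  /* one bit per page; set = private page */
        };

        return kvm_vm_ioctl(s, KVM_GET_PAGE_ENC_BITMAP, &e);
    }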

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 accel/kvm/kvm-all.c     |  1 +
 include/exec/ram_addr.h |  2 ++
 migration/ram.c         | 28 +++++++++++++++++++++++++++-
 target/i386/sev.c       | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 4d5ff8b9f5..0654d9a7cd 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1783,6 +1783,7 @@ static int kvm_init(MachineState *ms)
         }
 
         kvm_state->memcrypt_encrypt_data = sev_encrypt_data;
+        kvm_state->memcrypt_sync_page_enc_bitmap = sev_sync_page_enc_bitmap;
     }
 
     ret = kvm_arch_init(ms, s);
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index f96777bb99..2145059afc 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -51,6 +51,8 @@ struct RAMBlock {
     unsigned long *unsentmap;
     /* bitmap of already received pages in postcopy */
     unsigned long *receivedmap;
+    /* bitmap of page encryption state for an encrypted guest */
+    unsigned long *encbmap;
 };
 
 static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
diff --git a/migration/ram.c b/migration/ram.c
index 3c8977d508..a8631c0896 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1680,6 +1680,9 @@ static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
     rs->migration_dirty_pages +=
         cpu_physical_memory_sync_dirty_bitmap(rb, 0, length,
                                               &rs->num_dirty_pages_period);
+    if (kvm_memcrypt_enabled()) {
+        kvm_memcrypt_sync_page_enc_bitmap(rb->host, length, rb->encbmap);
+    }
 }
 
 /**
@@ -2465,6 +2468,22 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
     return false;
 }
 
+/**
+ * encrypted_test_bitmap: check if the page is encrypted
+ *
+ * Returns a bool indicating whether the page is encrypted.
+ */
+static bool encrypted_test_bitmap(RAMState *rs, RAMBlock *block,
+                                  unsigned long page)
+{
+    /* ROM devices contain unencrypted data */
+    if (memory_region_is_rom(block->mr)) {
+        return false;
+    }
+
+    return test_bit(page, block->encbmap);
+}
+
 /**
  * ram_save_target_page: save one target page
  *
@@ -2491,7 +2510,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
      * will take care of accessing the guest memory and re-encrypt it
      * for the transport purposes.
      */
-     if (kvm_memcrypt_enabled()) {
+     if (kvm_memcrypt_enabled() &&
+         encrypted_test_bitmap(rs, pss->block, pss->page)) {
         return ram_save_encrypted_page(rs, pss, last_stage);
      }
 
@@ -2724,6 +2744,8 @@ static void ram_save_cleanup(void *opaque)
         block->bmap = NULL;
         g_free(block->unsentmap);
         block->unsentmap = NULL;
+        g_free(block->encbmap);
+        block->encbmap = NULL;
     }
 
     xbzrle_cleanup();
@@ -3251,6 +3273,10 @@ static void ram_list_init_bitmaps(void)
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);
             }
+            if (kvm_memcrypt_enabled()) {
+                block->encbmap = bitmap_new(pages);
+                bitmap_set(block->encbmap, 0, pages);
+            }
         }
     }
 }
diff --git a/target/i386/sev.c b/target/i386/sev.c
index 6dbdc3cdf1..dd3814e25f 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -819,6 +819,33 @@ sev_encrypt_data(void *handle, uint8_t *ptr, uint64_t len)
     return 0;
 }
 
+int sev_sync_page_enc_bitmap(void *handle, uint8_t *host, uint64_t size,
+                            unsigned long *bitmap)
+{
+    int r;
+    unsigned long base_gpa;
+    KVMState *s = kvm_state;
+    struct kvm_page_enc_bitmap e = {};
+    unsigned long pages = size >> TARGET_PAGE_BITS;
+
+    r = kvm_physical_memory_addr_from_host(kvm_state, host, &base_gpa);
+    if (!r) {
+        return 1;
+    }
+
+    e.enc_bitmap = bitmap;
+    e.start = base_gpa >> TARGET_PAGE_BITS;
+    e.num_pages = pages;
+
+    if (kvm_vm_ioctl(s, KVM_GET_PAGE_ENC_BITMAP, &e) == -1) {
+        error_report("%s: get page_enc bitmap start 0x%llx pages 0x%llx",
+                __func__, e.start, e.num_pages);
+        return 1;
+    }
+
+    return 0;
+}
+
 static void
 sev_register_types(void)
 {
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 05/12] doc: update AMD SEV API spec web link
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (3 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 04/12] kvm: add support to sync the page encryption state bitmap Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 07/12] target/i386: sev: do not create launch context for an incoming guest Singh, Brijesh
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 docs/amd-memory-encryption.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt
index 43bf3ee6a5..abb9a976f5 100644
--- a/docs/amd-memory-encryption.txt
+++ b/docs/amd-memory-encryption.txt
@@ -98,7 +98,7 @@ AMD Memory Encryption whitepaper:
 http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf
 
 Secure Encrypted Virtualization Key Management:
-[1] http://support.amd.com/TechDocs/55766_SEV-KM API_Specification.pdf
+[1] https://developer.amd.com/sev/ (Secure Encrypted Virtualization API)
 
 KVM Forum slides:
 http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 07/12] target/i386: sev: do not create launch context for an incoming guest
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (4 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 05/12] doc: update AMD SEV API spec web link Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 06/12] doc: update AMD SEV to include Live migration flow Singh, Brijesh
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

LAUNCH_START is used to create an encryption context for a newly created
guest; for an incoming guest, RECEIVE_START should be used instead.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 target/i386/sev.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/target/i386/sev.c b/target/i386/sev.c
index dd3814e25f..1b05fcf9a9 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -789,10 +789,16 @@ sev_guest_init(const char *id)
         goto err;
     }
 
-    ret = sev_launch_start(s);
-    if (ret) {
-        error_report("%s: failed to create encryption context", __func__);
-        goto err;
+    /*
+     * The LAUNCH context is used for a new guest; for an incoming guest,
+     * the RECEIVE context will be created after the connection is established.
+     */
+    if (!runstate_check(RUN_STATE_INMIGRATE)) {
+        ret = sev_launch_start(s);
+        if (ret) {
+            error_report("%s: failed to create encryption context", __func__);
+            goto err;
+        }
     }
 
     ram_block_notifier_add(&sev_ram_notifier);
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 06/12] doc: update AMD SEV to include Live migration flow
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (5 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 07/12] target/i386: sev: do not create launch context for an incoming guest Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command Singh, Brijesh
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 docs/amd-memory-encryption.txt | 44 +++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt
index abb9a976f5..757e0d931a 100644
--- a/docs/amd-memory-encryption.txt
+++ b/docs/amd-memory-encryption.txt
@@ -89,7 +89,49 @@ TODO
 
 Live Migration
 ----------------
-TODO
+AMD SEV encrypts the memory of VMs and, because this encryption is done
+using an address tweak, the hypervisor will not be able to simply copy the
+ciphertext between machines to migrate a VM. Instead, the AMD SEV Key
+Management API provides a set of functions which the hypervisor can use
+to package a guest page for migration, while maintaining the confidentiality
+provided by AMD SEV.
+
+SEV guest VMs have the concept of private and shared memory. The private
+memory is encrypted with the guest-specific key, while shared memory may
+be encrypted with the hypervisor key. The migration APIs provided by the
+SEV API spec should be used for migrating the private pages. The
+KVM_GET_PAGE_ENC_BITMAP ioctl can be used to get the guest page state
+bitmap, which can be used to check whether a given guest page is
+private or shared.
+
+Before initiating the migration, we need to know the target's public
+Diffie-Hellman key (PDH) and certificate chain. These can be retrieved
+with the 'query-sev-capabilities' QMP command or using the sev-tool. The
+migrate-set-sev-info command can be used to pass the target's PDH and
+certificate chain.
+
+e.g.
+(QMP) migrate-set-sev-info pdh=<target_pdh> plat-cert=<target_cert_chain> \
+       amd-cert=<amd_cert>
+(QMP) migrate tcp:0:4444
+
+Note: the AMD certificate chain can be obtained from developer.amd.com/sev.
+
+During the migration flow, on the source hypervisor SEND_START is called
+first to create the outgoing encryption context. Based on the SEV guest
+policy, the certificates passed through migrate-set-sev-info will be
+validated before creating the encryption context. SEND_UPDATE_DATA is
+called to encrypt the guest private pages. After the migration is
+completed, SEND_FINISH is called to destroy the encryption context and
+make the VM non-runnable, to protect it against cloning.
+
+On the target hypervisor, RECEIVE_START is called first to create an
+incoming encryption context. RECEIVE_UPDATE_DATA is called to copy
+the received encrypted pages into guest memory. After migration of the
+pages is completed, RECEIVE_FINISH is called to make the VM runnable.
+
+For more information about the migration flow, see the SEV API spec,
+Appendix A: Usage Flow (Live Migration section).
 
 References
 -----------------
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (6 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 06/12] doc: update AMD SEV to include Live migration flow Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 19:13   ` Eric Blake
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 09/12] target/i386: sev: add support to encrypt the outgoing page Singh, Brijesh
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

The command can be used to specify the target's Platform Diffie-Hellman
key (PDH) and certificate chain before starting the SEV guest migration.
The values passed through the command will be used when creating the
outgoing encryption context.
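
For illustration, the QMP exchange would look roughly as follows (the
base64 values are elided placeholders):

    -> { "execute": "migrate-set-sev-info",
         "arguments": { "pdh": "<base64 PDH>",
                        "plat-cert": "<base64 platform certificate chain>",
                        "amd-cert": "<base64 AMD certificate chain>" } }
    <- { "return": {} }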

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 qapi/target.json       | 18 ++++++++++++++++++
 target/i386/monitor.c  | 10 ++++++++++
 target/i386/sev-stub.c |  5 +++++
 target/i386/sev.c      | 11 +++++++++++
 target/i386/sev_i386.h |  9 ++++++++-
 5 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/qapi/target.json b/qapi/target.json
index 1d4d54b600..4109772298 100644
--- a/qapi/target.json
+++ b/qapi/target.json
@@ -512,3 +512,21 @@
 ##
 { 'command': 'query-cpu-definitions', 'returns': ['CpuDefinitionInfo'],
   'if': 'defined(TARGET_PPC) || defined(TARGET_ARM) || defined(TARGET_I386) || defined(TARGET_S390X) || defined(TARGET_MIPS)' }
+
+##
+# @migrate-set-sev-info:
+#
+# This command is used to provide the target host information used during
+# the SEV guest migration.
+#
+# @pdh: the target host platform Diffie-Hellman key, encoded in base64
+#
+# @plat-cert: the target host platform certificate chain, encoded in base64
+#
+# @amd-cert: AMD certificate chain, which includes the ASK and OCA, encoded in base64
+#
+# Since 4.3
+#
+##
+{ 'command': 'migrate-set-sev-info',
+  'data': { 'pdh': 'str', 'plat-cert': 'str', 'amd-cert' : 'str' }}
diff --git a/target/i386/monitor.c b/target/i386/monitor.c
index 56e2dbece7..68e2e2b8ec 100644
--- a/target/i386/monitor.c
+++ b/target/i386/monitor.c
@@ -736,3 +736,13 @@ SevCapability *qmp_query_sev_capabilities(Error **errp)
 
     return data;
 }
+
+void qmp_migrate_set_sev_info(const char *pdh, const char *plat_cert,
+                              const char *amd_cert, Error **errp)
+{
+    if (sev_enabled()) {
+        sev_set_migrate_info(pdh, plat_cert, amd_cert);
+    } else {
+        error_setg(errp, "SEV is not enabled");
+    }
+}
diff --git a/target/i386/sev-stub.c b/target/i386/sev-stub.c
index e5ee13309c..173bfa6374 100644
--- a/target/i386/sev-stub.c
+++ b/target/i386/sev-stub.c
@@ -48,3 +48,8 @@ SevCapability *sev_get_capabilities(void)
 {
     return NULL;
 }
+
+void sev_set_migrate_info(const char *pdh, const char *plat_cert,
+                          const char *amd_cert)
+{
+}
diff --git a/target/i386/sev.c b/target/i386/sev.c
index 1b05fcf9a9..2c7c496593 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -852,6 +852,17 @@ int sev_sync_page_enc_bitmap(void *handle, uint8_t *host, uint64_t size,
     return 0;
 }
 
+void sev_set_migrate_info(const char *pdh, const char *plat_cert,
+                          const char *amd_cert)
+{
+    SEVState *s = sev_state;
+
+    s->remote_pdh = g_base64_decode(pdh, &s->remote_pdh_len);
+    s->remote_plat_cert = g_base64_decode(plat_cert,
+                                          &s->remote_plat_cert_len);
+    s->amd_cert = g_base64_decode(amd_cert, &s->amd_cert_len);
+}
+
 static void
 sev_register_types(void)
 {
diff --git a/target/i386/sev_i386.h b/target/i386/sev_i386.h
index c0f9373beb..258047ab2c 100644
--- a/target/i386/sev_i386.h
+++ b/target/i386/sev_i386.h
@@ -39,7 +39,8 @@ extern uint32_t sev_get_cbit_position(void);
 extern uint32_t sev_get_reduced_phys_bits(void);
 extern char *sev_get_launch_measurement(void);
 extern SevCapability *sev_get_capabilities(void);
-
+extern void sev_set_migrate_info(const char *pdh, const char *plat_cert,
+                                 const char *amd_cert);
 typedef struct QSevGuestInfo QSevGuestInfo;
 typedef struct QSevGuestInfoClass QSevGuestInfoClass;
 
@@ -81,6 +82,12 @@ struct SEVState {
     int sev_fd;
     SevState state;
     gchar *measurement;
+    guchar *remote_pdh;
+    size_t remote_pdh_len;
+    guchar *remote_plat_cert;
+    size_t remote_plat_cert_len;
+    guchar *amd_cert;
+    size_t amd_cert_len;
 };
 
 typedef struct SEVState SEVState;
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 09/12] target/i386: sev: add support to encrypt the outgoing page
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (7 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 10/12] target/i386: sev: add support to load incoming encrypted page Singh, Brijesh
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

sev_save_outgoing_page() provides the implementation to encrypt the
guest private pages for transit. The routine uses the SEND_START command
to create the outgoing encryption context on the first call, then
uses the SEND_UPDATE_DATA command to encrypt the data before writing it
to the socket. While encrypting the data, SEND_UPDATE_DATA produces some
metadata (e.g. MAC, IV), which is also sent to the target machine.
After migration is completed, we issue the SEND_FINISH command to
transition the SEV guest from the sending state to the unrunnable state.
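
Putting the pieces together, the outgoing stream for the encrypted pages is
framed roughly as follows (a sketch derived from sev_send_start() and
sev_send_update_data() below):

    once, before the first encrypted page (SEND_START):
        be32  guest policy
        be32  pdh_len,     followed by pdh_len bytes of the source PDH cert
        be32  session_len, followed by session_len bytes of the session blob

    for every encrypted guest page (SEND_UPDATE_DATA):
        be32  hdr_len,     followed by hdr_len bytes of packet header (MAC, IV, ...)
        be32  trans_len,   followed by trans_len bytes of encrypted page data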

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 accel/kvm/kvm-all.c      |   1 +
 target/i386/sev.c        | 229 +++++++++++++++++++++++++++++++++++++++
 target/i386/sev_i386.h   |   2 +
 target/i386/trace-events |   3 +
 4 files changed, 235 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 0654d9a7cd..85d6508e7f 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1784,6 +1784,7 @@ static int kvm_init(MachineState *ms)
 
         kvm_state->memcrypt_encrypt_data = sev_encrypt_data;
         kvm_state->memcrypt_sync_page_enc_bitmap = sev_sync_page_enc_bitmap;
+        kvm_state->memcrypt_save_outgoing_page = sev_save_outgoing_page;
     }
 
     ret = kvm_arch_init(ms, s);
diff --git a/target/i386/sev.c b/target/i386/sev.c
index 2c7c496593..b5aa53ec44 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -27,6 +27,8 @@
 #include "sysemu/sysemu.h"
 #include "trace.h"
 #include "migration/blocker.h"
+#include "migration/qemu-file.h"
+#include "migration/misc.h"
 
 #define DEFAULT_GUEST_POLICY    0x1 /* disable debug */
 #define DEFAULT_SEV_DEVICE      "/dev/sev"
@@ -718,6 +720,39 @@ sev_vm_state_change(void *opaque, int running, RunState state)
     }
 }
 
+static void
+sev_send_finish(void)
+{
+    int ret, error;
+
+    trace_kvm_sev_send_finish();
+    ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_SEND_FINISH, 0, &error);
+    if (ret) {
+        error_report("%s: LAUNCH_FINISH ret=%d fw_error=%d '%s'",
+                     __func__, ret, error, fw_error_to_str(error));
+    }
+
+    sev_set_guest_state(SEV_STATE_RUNNING);
+}
+
+static void
+sev_migration_state_notifier(Notifier *notifier, void *data)
+{
+    MigrationState *s = data;
+
+    if (migration_has_finished(s) ||
+        migration_in_postcopy_after_devices(s) ||
+        migration_has_failed(s)) {
+        if (sev_check_state(SEV_STATE_SEND_UPDATE)) {
+            sev_send_finish();
+        }
+    }
+}
+
+static Notifier sev_migration_state_notify = {
+    .notify = sev_migration_state_notifier,
+};
+
 void *
 sev_guest_init(const char *id)
 {
@@ -804,6 +839,7 @@ sev_guest_init(const char *id)
     ram_block_notifier_add(&sev_ram_notifier);
     qemu_add_machine_init_done_notifier(&sev_machine_done_notify);
     qemu_add_vm_change_state_handler(sev_vm_state_change, s);
+    add_migration_state_change_notifier(&sev_migration_state_notify);
 
     return s;
 err:
@@ -863,6 +899,199 @@ void sev_set_migrate_info(const char *pdh, const char *plat_cert,
     s->amd_cert = g_base64_decode(amd_cert, &s->amd_cert_len);
 }
 
+static int
+sev_get_send_session_length(void)
+{
+    int ret, fw_err = 0;
+    struct kvm_sev_send_start *start;
+
+    start = g_new0(struct kvm_sev_send_start, 1);
+
+    ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_SEND_START, start, &fw_err);
+    if (fw_err != SEV_RET_INVALID_LEN) {
+        ret = -1;
+        error_report("%s: failed to get session length ret=%d fw_error=%d '%s'",
+                     __func__, ret, fw_err, fw_error_to_str(fw_err));
+        goto err;
+    }
+
+    ret = start->session_len;
+err:
+    g_free(start);
+    return ret;
+}
+
+static int
+sev_send_start(SEVState *s, QEMUFile *f, uint64_t *bytes_sent)
+{
+    gsize pdh_len = 0, plat_cert_len;
+    int session_len, ret, fw_error;
+    struct kvm_sev_send_start *start;
+    guchar *pdh = NULL, *plat_cert = NULL, *session = NULL;
+
+    if (!s->remote_pdh || !s->remote_plat_cert) {
+        error_report("%s: missing remote PDH or PLAT_CERT", __func__);
+        return 1;
+    }
+
+    start = g_new0(struct kvm_sev_send_start, 1);
+
+    start->pdh_cert_uaddr = (unsigned long) s->remote_pdh;
+    start->pdh_cert_len = s->remote_pdh_len;
+
+    start->plat_cert_uaddr = (unsigned long)s->remote_plat_cert;
+    start->plat_cert_len = s->remote_plat_cert_len;
+
+    start->amd_cert_uaddr = (unsigned long)s->amd_cert;
+    start->amd_cert_len = s->amd_cert_len;
+
+    /* get the session length */
+    session_len = sev_get_send_session_length();
+    if (session_len < 0) {
+        ret = 1;
+        goto err;
+    }
+
+    session = g_new0(guchar, session_len);
+    start->session_uaddr = (unsigned long)session;
+    start->session_len = session_len;
+
+    /* Get our PDH certificate */
+    ret = sev_get_pdh_info(s->sev_fd, &pdh, &pdh_len,
+                           &plat_cert, &plat_cert_len);
+    if (ret) {
+        error_report("Failed to get our PDH cert");
+        goto err;
+    }
+
+    trace_kvm_sev_send_start(start->pdh_cert_uaddr, start->pdh_cert_len,
+                             start->plat_cert_uaddr, start->plat_cert_len,
+                             start->amd_cert_uaddr, start->amd_cert_len);
+
+    ret = sev_ioctl(s->sev_fd, KVM_SEV_SEND_START, start, &fw_error);
+    if (ret < 0) {
+        error_report("%s: SEND_START ret=%d fw_error=%d '%s'",
+                __func__, ret, fw_error, fw_error_to_str(fw_error));
+        goto err;
+    }
+
+    qemu_put_be32(f, start->policy);
+    qemu_put_be32(f, pdh_len);
+    qemu_put_buffer(f, (uint8_t *)pdh, pdh_len);
+    qemu_put_be32(f, start->session_len);
+    qemu_put_buffer(f, (uint8_t *)start->session_uaddr, start->session_len);
+    *bytes_sent = 12 + pdh_len + start->session_len;
+
+    sev_set_guest_state(SEV_STATE_SEND_UPDATE);
+
+err:
+    g_free(start);
+    g_free(pdh);
+    g_free(plat_cert);
+    return ret;
+}
+
+static int
+sev_send_get_packet_len(int *fw_err)
+{
+    int ret;
+    struct kvm_sev_send_update_data *update;
+
+    update = g_malloc0(sizeof(*update));
+    if (!update) {
+        return -1;
+    }
+
+    ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_SEND_UPDATE_DATA, update, fw_err);
+    if (*fw_err != SEV_RET_INVALID_LEN) {
+        ret = -1;
+        error_report("%s: failed to get session length ret=%d fw_error=%d '%s'",
+                    __func__, ret, *fw_err, fw_error_to_str(*fw_err));
+        goto err;
+    }
+
+    ret = update->hdr_len;
+
+err:
+    g_free(update);
+    return ret;
+}
+
+static int
+sev_send_update_data(SEVState *s, QEMUFile *f, uint8_t *ptr, uint32_t size,
+                     uint64_t *bytes_sent)
+{
+    int ret, fw_error;
+    guchar *trans;
+    struct kvm_sev_send_update_data *update;
+
+    /* If this is the first call, query the packet header length and allocate
+     * the packet buffer.
+     */
+    if (!s->send_packet_hdr) {
+        s->send_packet_hdr_len = sev_send_get_packet_len(&fw_error);
+        if (s->send_packet_hdr_len < 1) {
+            error_report("%s: SEND_UPDATE fw_error=%d '%s'",
+                    __func__, fw_error, fw_error_to_str(fw_error));
+            return 1;
+        }
+
+        s->send_packet_hdr = g_new(gchar, s->send_packet_hdr_len);
+    }
+
+    update = g_new0(struct kvm_sev_send_update_data, 1);
+
+    /* allocate transport buffer */
+    trans = g_new(guchar, size);
+
+    update->hdr_uaddr = (unsigned long)s->send_packet_hdr;
+    update->hdr_len = s->send_packet_hdr_len;
+    update->guest_uaddr = (unsigned long)ptr;
+    update->guest_len = size;
+    update->trans_uaddr = (unsigned long)trans;
+    update->trans_len = size;
+
+    trace_kvm_sev_send_update_data(ptr, trans, size);
+
+    ret = sev_ioctl(s->sev_fd, KVM_SEV_SEND_UPDATE_DATA, update, &fw_error);
+    if (ret) {
+        error_report("%s: SEND_UPDATE_DATA ret=%d fw_error=%d '%s'",
+                __func__, ret, fw_error, fw_error_to_str(fw_error));
+        goto err;
+    }
+
+    qemu_put_be32(f, update->hdr_len);
+    qemu_put_buffer(f, (uint8_t *)update->hdr_uaddr, update->hdr_len);
+    *bytes_sent = 4 + update->hdr_len;
+
+    qemu_put_be32(f, update->trans_len);
+    qemu_put_buffer(f, (uint8_t *)update->trans_uaddr, update->trans_len);
+    *bytes_sent += (4 + update->trans_len);
+
+err:
+    g_free(trans);
+    g_free(update);
+    return ret;
+}
+
+int sev_save_outgoing_page(void *handle, QEMUFile *f, uint8_t *ptr,
+                           uint32_t sz, uint64_t *bytes_sent)
+{
+    SEVState *s = sev_state;
+
+    /*
+     * If this is the first buffer, create the outgoing encryption context
+     * and write our PDH, policy and session data.
+     */
+    if (!sev_check_state(SEV_STATE_SEND_UPDATE) &&
+        sev_send_start(s, f, bytes_sent)) {
+        error_report("Failed to create outgoing context");
+        return 1;
+    }
+
+    return sev_send_update_data(s, f, ptr, sz, bytes_sent);
+}
+
 static void
 sev_register_types(void)
 {
diff --git a/target/i386/sev_i386.h b/target/i386/sev_i386.h
index 258047ab2c..38893fb1fa 100644
--- a/target/i386/sev_i386.h
+++ b/target/i386/sev_i386.h
@@ -88,6 +88,8 @@ struct SEVState {
     size_t remote_plat_cert_len;
     guchar *amd_cert;
     size_t amd_cert_len;
+    gchar *send_packet_hdr;
+    size_t send_packet_hdr_len;
 };
 
 typedef struct SEVState SEVState;
diff --git a/target/i386/trace-events b/target/i386/trace-events
index 789c700d4a..b41516cf9f 100644
--- a/target/i386/trace-events
+++ b/target/i386/trace-events
@@ -15,3 +15,6 @@ kvm_sev_launch_start(int policy, void *session, void *pdh) "policy 0x%x session
 kvm_sev_launch_update_data(void *addr, uint64_t len) "addr %p len 0x%" PRIu64
 kvm_sev_launch_measurement(const char *value) "data %s"
 kvm_sev_launch_finish(void) ""
+kvm_sev_send_start(uint64_t pdh, int l1, uint64_t plat, int l2, uint64_t amd, int l3) "pdh 0x%" PRIx64 " len %d plat 0x%" PRIx64 " len %d amd 0x%" PRIx64 " len %d"
+kvm_sev_send_update_data(void *src, void *dst, int len) "guest %p trans %p len %d"
+kvm_sev_send_finish(void) ""
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 10/12] target/i386: sev: add support to load incoming encrypted page
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (8 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 09/12] target/i386: sev: add support to encrypt the outgoing page Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 11/12] migration: add support to migrate page encryption bitmap Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 12/12] target/i386: sev: remove migration blocker Singh, Brijesh
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

sev_load_incoming_page() provides the implementation to read the
incoming guest private pages from the socket and load them into the guest
memory. The routine uses the RECEIVE_START command to create the
incoming encryption context on the first call, then uses the
RECEIVE_UPDATE_DATA command to load the encrypted pages into the guest
memory. After migration is completed, we issue the RECEIVE_FINISH command
to transition the SEV guest to the runnable state so that it can be
executed.
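
Together with the previous patch, the SEV command sequence across the two
hosts is roughly:

    source: SEND_START -> SEND_UPDATE_DATA (per private page) -> SEND_FINISH
    target: RECEIVE_START -> RECEIVE_UPDATE_DATA (per private page)
            -> RECEIVE_FINISH (when the incoming VM is resumed)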

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 accel/kvm/kvm-all.c      |   1 +
 target/i386/sev.c        | 126 ++++++++++++++++++++++++++++++++++++++-
 target/i386/trace-events |   3 +
 3 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 85d6508e7f..fe65c8eb5d 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1785,6 +1785,7 @@ static int kvm_init(MachineState *ms)
         kvm_state->memcrypt_encrypt_data = sev_encrypt_data;
         kvm_state->memcrypt_sync_page_enc_bitmap = sev_sync_page_enc_bitmap;
         kvm_state->memcrypt_save_outgoing_page = sev_save_outgoing_page;
+        kvm_state->memcrypt_load_incoming_page = sev_load_incoming_page;
     }
 
     ret = kvm_arch_init(ms, s);
diff --git a/target/i386/sev.c b/target/i386/sev.c
index b5aa53ec44..b7feedce7d 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -708,13 +708,34 @@ sev_launch_finish(SEVState *s)
     }
 }
 
+static int
+sev_receive_finish(SEVState *s)
+{
+    int error, ret = 1;
+
+    trace_kvm_sev_receive_finish();
+    ret = sev_ioctl(s->sev_fd, KVM_SEV_RECEIVE_FINISH, 0, &error);
+    if (ret) {
+        error_report("%s: RECEIVE_FINISH ret=%d fw_error=%d '%s'",
+                __func__, ret, error, fw_error_to_str(error));
+        goto err;
+    }
+
+    sev_set_guest_state(SEV_STATE_RUNNING);
+err:
+    return ret;
+}
+
+
 static void
 sev_vm_state_change(void *opaque, int running, RunState state)
 {
     SEVState *s = opaque;
 
     if (running) {
-        if (!sev_check_state(SEV_STATE_RUNNING)) {
+        if (sev_check_state(SEV_STATE_RECEIVE_UPDATE)) {
+            sev_receive_finish(s);
+        } else if (!sev_check_state(SEV_STATE_RUNNING)) {
             sev_launch_finish(s);
         }
     }
@@ -1092,6 +1113,109 @@ int sev_save_outgoing_page(void *handle, QEMUFile *f, uint8_t *ptr,
     return sev_send_update_data(s, f, ptr, sz, bytes_sent);
 }
 
+static int
+sev_receive_start(QSevGuestInfo *sev, QEMUFile *f)
+{
+    int ret = 1;
+    int fw_error;
+    struct kvm_sev_receive_start *start;
+    gchar *session = NULL, *pdh_cert = NULL;
+
+    start = g_new0(struct kvm_sev_receive_start, 1);
+
+    /* get SEV guest handle */
+    start->handle = object_property_get_int(OBJECT(sev), "handle",
+            &error_abort);
+
+    /* get the source policy */
+    start->policy = qemu_get_be32(f);
+
+    /* get source PDH key */
+    start->pdh_len = qemu_get_be32(f);
+    pdh_cert = g_new(gchar, start->pdh_len);
+    qemu_get_buffer(f, (uint8_t *)pdh_cert, start->pdh_len);
+    start->pdh_uaddr = (unsigned long)pdh_cert;
+
+    /* get source session data */
+    start->session_len = qemu_get_be32(f);
+    session = g_new(gchar, start->session_len);
+    qemu_get_buffer(f, (uint8_t *)session, start->session_len);
+    start->session_uaddr = (unsigned long)session;
+
+    trace_kvm_sev_receive_start(start->policy, session, pdh_cert);
+
+    ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_RECEIVE_START, start, &fw_error);
+    if (ret < 0) {
+        error_report("Error RECEIVE_START ret=%d fw_error=%d '%s'",
+                ret, fw_error, fw_error_to_str(fw_error));
+        goto err;
+    }
+
+    object_property_set_int(OBJECT(sev), start->handle, "handle", &error_abort);
+    sev_set_guest_state(SEV_STATE_RECEIVE_UPDATE);
+err:
+    g_free(start);
+    g_free(session);
+    g_free(pdh_cert);
+
+    return ret;
+}
+
+static int sev_receive_update_data(QEMUFile *f, uint8_t *ptr)
+{
+    int ret = 1, fw_error = 0;
+    gchar *hdr = NULL, *trans = NULL;
+    struct kvm_sev_receive_update_data *update;
+
+    update = g_new0(struct kvm_sev_receive_update_data, 1);
+
+    /* get packet header */
+    update->hdr_len = qemu_get_be32(f);
+    hdr = g_new(gchar, update->hdr_len);
+    qemu_get_buffer(f, (uint8_t *)hdr, update->hdr_len);
+    update->hdr_uaddr = (unsigned long)hdr;
+
+    /* get transport buffer */
+    update->trans_len = qemu_get_be32(f);
+    trans = g_new(gchar, update->trans_len);
+    update->trans_uaddr = (unsigned long)trans;
+    qemu_get_buffer(f, (uint8_t *)update->trans_uaddr, update->trans_len);
+
+    update->guest_uaddr = (unsigned long) ptr;
+    update->guest_len = update->trans_len;
+
+    trace_kvm_sev_receive_update_data(trans, ptr, update->guest_len,
+            hdr, update->hdr_len);
+
+    ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_RECEIVE_UPDATE_DATA,
+                    update, &fw_error);
+    if (ret) {
+        error_report("Error RECEIVE_UPDATE_DATA ret=%d fw_error=%d '%s'",
+                ret, fw_error, fw_error_to_str(fw_error));
+        goto err;
+    }
+err:
+    g_free(trans);
+    g_free(update);
+    g_free(hdr);
+    return ret;
+}
+
+int sev_load_incoming_page(void *handle, QEMUFile *f, uint8_t *ptr)
+{
+    SEVState *s = (SEVState *)handle;
+
+    /* If this is the first buffer and SEV is not in the receiving state,
+     * use the RECEIVE_START command to create an encryption context.
+     */
+    if (!sev_check_state(SEV_STATE_RECEIVE_UPDATE) &&
+        sev_receive_start(s->sev_info, f)) {
+        return 1;
+    }
+
+    return sev_receive_update_data(f, ptr);
+}
+
 static void
 sev_register_types(void)
 {
diff --git a/target/i386/trace-events b/target/i386/trace-events
index b41516cf9f..609752cca7 100644
--- a/target/i386/trace-events
+++ b/target/i386/trace-events
@@ -18,3 +18,6 @@ kvm_sev_launch_finish(void) ""
 kvm_sev_send_start(uint64_t pdh, int l1, uint64_t plat, int l2, uint64_t amd, int l3) "pdh 0x%" PRIx64 " len %d plat 0x%" PRIx64 " len %d amd 0x%" PRIx64 " len %d"
 kvm_sev_send_update_data(void *src, void *dst, int len) "guest %p trans %p len %d"
 kvm_sev_send_finish(void) ""
+kvm_sev_receive_start(int policy, void *session, void *pdh) "policy 0x%x session %p pdh %p"
+kvm_sev_receive_update_data(void *src, void *dst, int len, void *hdr, int hdr_len) "guest %p trans %p len %d hdr %p hdr_len %d"
+kvm_sev_receive_finish(void) ""
-- 
2.17.1



* [Qemu-devel] [RFC PATCH v1 11/12] migration: add support to migrate page encryption bitmap
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (9 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 10/12] target/i386: sev: add support to load incoming encrypted page Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 12/12] target/i386: sev: remove migration blocker Singh, Brijesh
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

When memory encryption is enabled, the hypervisor maintains a page
encryption bitmap which it consults during migration to check whether a
page is private or shared. The bitmap is built during VM boot and must be
migrated to the target host so that the hypervisor on the target host can
use it for future migrations.
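
As a rough illustration of the stream layout this patch produces for each
migratable RAMBlock (flag word, idstr, base GPA, region length, raw bitmap
words), the sketch below serializes one block's bitmap with plain stdio in
place of QEMUFile. put_be64() and save_block_enc_bitmap() are helpers made
up for this sketch, and 4K target pages plus 64-bit longs are assumed:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP 0x400
    #define TARGET_PAGE_BITS 12                /* assumes 4K target pages */

    /* big-endian 64-bit write, standing in for qemu_put_be64() */
    static void put_be64(FILE *f, uint64_t v)
    {
        for (int i = 7; i >= 0; i--) {
            fputc((int)((v >> (i * 8)) & 0xff), f);
        }
    }

    /* emit one RAMBlock's page encryption bitmap record */
    static void save_block_enc_bitmap(FILE *f, const char *idstr,
                                      uint64_t base_gpa, uint64_t length,
                                      const unsigned long *bmap)
    {
        uint64_t pages = length >> TARGET_PAGE_BITS;
        /* BITS_TO_LONGS(pages) * sizeof(unsigned long) on a 64-bit host */
        size_t bmap_sz = ((pages + 63) / 64) * sizeof(uint64_t);

        put_be64(f, RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP); /* section flag */
        fputc((int)strlen(idstr), f);                     /* idstr length */
        fwrite(idstr, 1, strlen(idstr), f);               /* block id */
        put_be64(f, base_gpa);                            /* guest phys base */
        put_be64(f, length);                              /* region length */
        fwrite(bmap, 1, bmap_sz, f);                      /* bitmap words */
    }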

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 accel/kvm/kvm-all.c      |  4 +++
 migration/ram.c          | 43 +++++++++++++++++++++++++++++-
 target/i386/sev.c        | 56 ++++++++++++++++++++++++++++++++++++++++
 target/i386/trace-events |  3 +++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index fe65c8eb5d..0d75ad94f8 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1786,6 +1786,10 @@ static int kvm_init(MachineState *ms)
         kvm_state->memcrypt_sync_page_enc_bitmap = sev_sync_page_enc_bitmap;
         kvm_state->memcrypt_save_outgoing_page = sev_save_outgoing_page;
         kvm_state->memcrypt_load_incoming_page = sev_load_incoming_page;
+        kvm_state->memcrypt_load_incoming_page_enc_bitmap =
+            sev_load_incoming_page_enc_bitmap;
+        kvm_state->memcrypt_save_outgoing_page_enc_bitmap =
+            sev_save_outgoing_page_enc_bitmap;
     }
 
     ret = kvm_arch_init(ms, s);
diff --git a/migration/ram.c b/migration/ram.c
index a8631c0896..5c8403588f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -78,6 +78,7 @@
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
 #define RAM_SAVE_FLAG_ENCRYPTED_PAGE   0x200
+#define RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP       0x400
 
 static inline bool is_zero_range(uint8_t *p, uint64_t size)
 {
@@ -3551,6 +3552,35 @@ out:
     return done;
 }
 
+/**
+ * migration_save_page_enc_bitmap: function to send the page enc bitmap
+ *
+ * Returns zero to indicate success or negative on error
+ */
+static int migration_save_page_enc_bitmap(QEMUFile *f, RAMState *rs)
+{
+    int r;
+    RAMBlock *block;
+
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
+        /* ROM regions do not contain encrypted data, skip sending the bitmap */
+        if (memory_region_is_rom(block->mr)) {
+            continue;
+        }
+
+        qemu_put_be64(f, RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP);
+        qemu_put_byte(f, strlen(block->idstr));
+        qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
+        r = kvm_memcrypt_save_outgoing_page_enc_bitmap(f, block->host,
+                block->max_length, block->encbmap);
+        if (r) {
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
 /**
  * ram_save_complete: function called to send the remaining amount of ram
  *
@@ -3595,6 +3625,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     flush_compressed_data(rs);
     ram_control_after_iterate(f, RAM_CONTROL_FINISH);
 
+    if (kvm_memcrypt_enabled()) {
+        ret = migration_save_page_enc_bitmap(f, rs);
+    }
+
     rcu_read_unlock();
 
     multifd_send_sync_main();
@@ -4343,7 +4377,8 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 
         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE |
-                     RAM_SAVE_FLAG_ENCRYPTED_PAGE)) {
+                     RAM_SAVE_FLAG_ENCRYPTED_PAGE |
+                     RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
             /*
@@ -4469,6 +4504,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                     ret = -EINVAL;
             }
             break;
+        case RAM_SAVE_FLAG_PAGE_ENCRYPTED_BITMAP:
+            if (kvm_memcrypt_load_incoming_page_enc_bitmap(f)) {
+                error_report("Failed to load page enc bitmap");
+                ret = -EINVAL;
+            }
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             multifd_recv_sync_main();
diff --git a/target/i386/sev.c b/target/i386/sev.c
index b7feedce7d..dc1e974d93 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -896,6 +896,8 @@ int sev_sync_page_enc_bitmap(void *handle, uint8_t *host, uint64_t size,
         return 1;
     }
 
+    trace_kvm_sev_sync_page_enc_bitmap(base_gpa, size);
+
     e.enc_bitmap = bitmap;
     e.start = base_gpa >> TARGET_PAGE_BITS;
     e.num_pages = pages;
@@ -1216,6 +1218,60 @@ int sev_load_incoming_page(void *handle, QEMUFile *f, uint8_t *ptr)
     return sev_receive_update_data(f, ptr);
 }
 
+int sev_load_incoming_page_enc_bitmap(void *handle, QEMUFile *f)
+{
+    void *bmap;
+    unsigned long pages, length;
+    unsigned long bmap_size, base_gpa;
+    struct kvm_page_enc_bitmap e = {};
+
+    base_gpa = qemu_get_be64(f);
+    length = qemu_get_be64(f);
+    pages = length >> TARGET_PAGE_BITS;
+
+    bmap_size = BITS_TO_LONGS(pages) * sizeof(unsigned long);
+    bmap = g_malloc0(bmap_size);
+    qemu_get_buffer(f, (uint8_t *)bmap, bmap_size);
+
+    trace_kvm_sev_load_page_enc_bitmap(base_gpa, length);
+
+    e.start = base_gpa >> TARGET_PAGE_BITS;
+    e.num_pages = pages;
+    e.enc_bitmap = bmap;
+    if (kvm_vm_ioctl(kvm_state, KVM_SET_PAGE_ENC_BITMAP, &e) == -1) {
+        error_report("KVM_SET_PAGE_ENC_BITMAP ioctl failed %d", errno);
+        g_free(bmap);
+        return 1;
+    }
+
+    g_free(bmap);
+
+    return 0;
+}
+
+int sev_save_outgoing_page_enc_bitmap(void *handle, QEMUFile *f,
+                                      uint8_t *host, uint64_t length,
+                                      unsigned long *bmap)
+{
+    int r;
+    unsigned long base_gpa;
+    unsigned long pages = length >> TARGET_PAGE_BITS;
+    unsigned long bmap_sz = BITS_TO_LONGS(pages) * sizeof(unsigned long);
+
+    r = kvm_physical_memory_addr_from_host(kvm_state, host, &base_gpa);
+    if (!r) {
+        return 1;
+    }
+
+    trace_kvm_sev_save_page_enc_bitmap(base_gpa, length);
+
+    qemu_put_be64(f, base_gpa);
+    qemu_put_be64(f, length);
+    qemu_put_buffer(f, (uint8_t *)bmap, bmap_sz);
+
+    return 0;
+}
+
 static void
 sev_register_types(void)
 {
diff --git a/target/i386/trace-events b/target/i386/trace-events
index 609752cca7..fe914c9048 100644
--- a/target/i386/trace-events
+++ b/target/i386/trace-events
@@ -21,3 +21,6 @@ kvm_sev_send_finish(void) ""
 kvm_sev_receive_start(int policy, void *session, void *pdh) "policy 0x%x session %p pdh %p"
 kvm_sev_receive_update_data(void *src, void *dst, int len, void *hdr, int hdr_len) "guest %p trans %p len %d hdr %p hdr_len %d"
 kvm_sev_receive_finish(void) ""
+kvm_sev_sync_page_enc_bitmap(uint64_t start, uint64_t len) "start 0x%" PRIx64 " len 0x%" PRIx64
+kvm_sev_save_page_enc_bitmap(uint64_t start, uint64_t len) "start 0x%" PRIx64 " len 0x%" PRIx64
+kvm_sev_load_page_enc_bitmap(uint64_t start, uint64_t len) "start 0x%" PRIx64 " len 0x%" PRIx64
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Qemu-devel] [RFC PATCH v1 12/12] target/i386: sev: remove migration blocker
  2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
                   ` (10 preceding siblings ...)
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 11/12] migration: add support to migrate page encryption bitmap Singh, Brijesh
@ 2019-06-20 18:03 ` Singh, Brijesh
  11 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 18:03 UTC (permalink / raw)
  To: qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 target/i386/sev.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/target/i386/sev.c b/target/i386/sev.c
index dc1e974d93..095ef4c729 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -34,7 +34,6 @@
 #define DEFAULT_SEV_DEVICE      "/dev/sev"
 
 static SEVState *sev_state;
-static Error *sev_mig_blocker;
 
 static const char *const sev_fw_errlist[] = {
     "",
@@ -685,7 +684,6 @@ static void
 sev_launch_finish(SEVState *s)
 {
     int ret, error;
-    Error *local_err = NULL;
 
     trace_kvm_sev_launch_finish();
     ret = sev_ioctl(sev_state->sev_fd, KVM_SEV_LAUNCH_FINISH, 0, &error);
@@ -696,16 +694,6 @@ sev_launch_finish(SEVState *s)
     }
 
     sev_set_guest_state(SEV_STATE_RUNNING);
-
-    /* add migration blocker */
-    error_setg(&sev_mig_blocker,
-               "SEV: Migration is not implemented");
-    ret = migrate_add_blocker(sev_mig_blocker, &local_err);
-    if (local_err) {
-        error_report_err(local_err);
-        error_free(sev_mig_blocker);
-        exit(1);
-    }
 }
 
 static int
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command
  2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command Singh, Brijesh
@ 2019-06-20 19:13   ` Eric Blake
  2019-06-20 19:18     ` Singh, Brijesh
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Blake @ 2019-06-20 19:13 UTC (permalink / raw)
  To: Singh, Brijesh, qemu-devel; +Cc: Lendacky, Thomas, kvm


On 6/20/19 1:03 PM, Singh, Brijesh wrote:
> The command can be used by the hypervisor to specify the target Platform
> Diffie-Hellman key (PDH) and certificate chain before starting the SEV
> guest migration. The values passed through the command will be used while
> creating the outgoing encryption context.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  qapi/target.json       | 18 ++++++++++++++++++
>  target/i386/monitor.c  | 10 ++++++++++
>  target/i386/sev-stub.c |  5 +++++
>  target/i386/sev.c      | 11 +++++++++++
>  target/i386/sev_i386.h |  9 ++++++++-
>  5 files changed, 52 insertions(+), 1 deletion(-)
> 

> +++ b/qapi/target.json
> @@ -512,3 +512,21 @@
>  ##
>  { 'command': 'query-cpu-definitions', 'returns': ['CpuDefinitionInfo'],
>    'if': 'defined(TARGET_PPC) || defined(TARGET_ARM) || defined(TARGET_I386) || defined(TARGET_S390X) || defined(TARGET_MIPS)' }
> +
> +##
> +# @migrate-set-sev-info:
> +#
> +# The command is used to provide the target host information used during the
> +# SEV guest migration.
> +#
> +# @pdh the target host platform Diffie-Hellman key encoded in base64
> +#
> +# @plat-cert the target host platform certificate chain encoded in base64
> +#
> +# @amd-cert AMD certificate chain which includes the ASK and OCA encoded in base64
> +#
> +# Since 4.3

The next release is 4.1, then likely 4.2 near the end of the calendar
year, then 5.0 in 2020. There is no planned 4.3 release.  Are you trying
to get this in 4.1?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command
  2019-06-20 19:13   ` Eric Blake
@ 2019-06-20 19:18     ` Singh, Brijesh
  0 siblings, 0 replies; 15+ messages in thread
From: Singh, Brijesh @ 2019-06-20 19:18 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: Lendacky, Thomas, Singh, Brijesh, kvm



On 6/20/19 2:13 PM, Eric Blake wrote:
> On 6/20/19 1:03 PM, Singh, Brijesh wrote:
>> The command can be used by the hypervisor to specify the target Platform
>> Diffie-Hellman key (PDH) and certificate chain before starting the SEV
>> guest migration. The values passed through the command will be used while
>> creating the outgoing encryption context.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>   qapi/target.json       | 18 ++++++++++++++++++
>>   target/i386/monitor.c  | 10 ++++++++++
>>   target/i386/sev-stub.c |  5 +++++
>>   target/i386/sev.c      | 11 +++++++++++
>>   target/i386/sev_i386.h |  9 ++++++++-
>>   5 files changed, 52 insertions(+), 1 deletion(-)
>>
> 
>> +++ b/qapi/target.json
>> @@ -512,3 +512,21 @@
>>   ##
>>   { 'command': 'query-cpu-definitions', 'returns': ['CpuDefinitionInfo'],
>>     'if': 'defined(TARGET_PPC) || defined(TARGET_ARM) || defined(TARGET_I386) || defined(TARGET_S390X) || defined(TARGET_MIPS)' }
>> +
>> +##
>> +# @migrate-set-sev-info:
>> +#
>> +# The command is used to provide the target host information used during the
>> +# SEV guest migration.
>> +#
>> +# @pdh the target host platform Diffie-Hellman key encoded in base64
>> +#
>> +# @plat-cert the target host platform certificate chain encoded in base64
>> +#
>> +# @amd-cert AMD certificate chain which includes the ASK and OCA encoded in base64
>> +#
>> +# Since 4.3
> 
> The next release is 4.1, then likely 4.2 near the end of the calendar
> year, then 5.0 in 2020. There is no planned 4.3 release.  Are you trying
> to get this in 4.1?


Ah, I meant to type 4.2, not 4.3. The series has a dependency on kernel
patches; my best effort is to get it ready for the 4.2 merge window.
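
For reference, once the kernel side lands, the source-side flow would be to
issue the command before starting migration, roughly like the QMP exchange
sketched below (the base64 strings and the migration URI are placeholders,
not real keys, certificates or hosts):

    { "execute": "migrate-set-sev-info",
      "arguments": { "pdh": "<base64 PDH>",
                     "plat-cert": "<base64 platform cert chain>",
                     "amd-cert": "<base64 AMD cert chain>" } }
    { "return": {} }
    { "execute": "migrate",
      "arguments": { "uri": "tcp:target-host:4444" } }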

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-06-20 20:20 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-20 18:03 [Qemu-devel] [RFC PATCH v1 00/12] Add SEV guest live migration support Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 02/12] kvm: introduce high-level API to support encrypted guest migration Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 01/12] linux-headers: update kernel header to include SEV migration commands Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 03/12] migration/ram: add support to send encrypted pages Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 04/12] kvm: add support to sync the page encryption state bitmap Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 05/12] doc: update AMD SEV API spec web link Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 07/12] target/i386: sev: do not create launch context for an incoming guest Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 06/12] doc: update AMD SEV to include Live migration flow Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 08/12] target.json: add migrate-set-sev-info command Singh, Brijesh
2019-06-20 19:13   ` Eric Blake
2019-06-20 19:18     ` Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 09/12] target/i386: sev: add support to encrypt the outgoing page Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 10/12] target/i386: sev: add support to load incoming encrypted page Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 11/12] migration: add support to migrate page encryption bitmap Singh, Brijesh
2019-06-20 18:03 ` [Qemu-devel] [RFC PATCH v1 12/12] target/i386: sev: remove migration blocker Singh, Brijesh
