* [PATCH 0/3] Add AMD SEV and SEV-ES local migration support
@ 2021-06-21 16:31 Peter Gonda
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Peter Gonda @ 2021-06-21 16:31 UTC (permalink / raw)
  To: kvm
  Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

Local migration provides a low-cost mechanism for userspace VMM upgrades.
It is an alternative to traditional (i.e., remote) live migration. Whereas
remote migration handles moving a guest to a new host, local migration only
handles moving a guest to a new userspace VMM within a host. This can be
used to update, roll back, or change the flags of the VMM. The lower cost
compared to live migration comes from the fact that the guest's memory does
not need to be copied between processes. A handle to the guest memory is
simply passed to the new VMM, for example by backing guest RAM with
/dev/shm (share=on) or a similar mechanism.
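
As a rough illustration only (not part of this series), guest RAM can be
backed by a file descriptor that both the old and the new VMM map; the fd,
not the page contents, is what gets handed over:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/*
 * Illustrative sketch, error handling omitted. The fd can be passed to
 * the new VMM over a unix socket (SCM_RIGHTS) or kept open across exec,
 * so no guest page contents are ever copied.
 */
static void *alloc_shared_guest_ram(size_t size, int *out_fd)
{
	int fd = memfd_create("guest-ram", 0);

	ftruncate(fd, size);
	*out_fd = fd;

	/* Both the old and the new VMM mmap() the same fd. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}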

The guest state can be transferred from an old VMM to a new VMM as follows:
1. Export guest state from KVM to the old user-space VMM via a getter
   user-space/kernel API.
2. Transfer guest state from the old VMM to the new VMM via IPC.
3. Import guest state into KVM from the new user-space VMM via a setter
   user-space/kernel API.
In other words, state is exported from KVM using getters, handed to the new
VMM, then set again in KVM via setters.

In the common case for local migration, we can rely on the normal ioctls
for passing data from one VMM to the next. SEV, SEV-ES, and other
confidential compute environments make most of this information opaque, and
render KVM ioctls such as "KVM_GET_REGS" irrelevant.  As a result, we need
the ability to pass this opaque metadata from one VMM to the next. The
easiest way to do this is to leave this data in the kernel, and transfer
ownership of the metadata from one KVM VM (or vCPU) to the next. For
example, we need to move the SEV guest's ASID, VMSAs, and GHCB metadata
from one VMM to the next.  In general, we need to be able to hand off any
data that would be unsafe/impossible for the kernel to hand directly to
userspace (and cannot be reproduced using data that can be handed safely to
userspace).

During the local send operation, the SEV metadata required for migration,
such as the guest's ASID, is loaded into a KVM-wide hashmap keyed by a
token that is returned to userspace. The userspace VMM hands this token to
the target VMM, which passes it to the local receive command to be loaded
with the metadata from the hashmap.
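
For reference, a minimal sketch (not part of this series) of how a
userspace VMM might drive the new commands through KVM_MEMORY_ENCRYPT_OP;
both ends are collapsed into one hypothetical helper for brevity, though
in practice they run in the old and the new VMM processes respectively:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustrative only, error handling omitted. src_vm_fd/dst_vm_fd are KVM
 * VM fds, sev_fd is an open /dev/sev fd, and the target VM is assumed to
 * already be SEV-initialized (KVM_SEV_INIT) but not launched.
 */
static void sev_local_migrate(int src_vm_fd, int dst_vm_fd, int sev_fd)
{
	struct kvm_sev_local_send send = {};
	struct kvm_sev_local_receive recv = {};
	struct kvm_sev_cmd cmd = {};

	/* Old VMM: stage the SEV info and obtain a one-time token. */
	cmd.id = KVM_SEV_LOCAL_SEND;
	cmd.data = (uint64_t)(uintptr_t)&send;
	cmd.sev_fd = sev_fd;
	ioctl(src_vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

	/* The token would normally travel to the new VMM over IPC. */

	/* New VMM: import the staged SEV info using the token. */
	recv.info_token = send.info_token;
	cmd.id = KVM_SEV_LOCAL_RECEIVE;
	cmd.data = (uint64_t)(uintptr_t)&recv;
	ioctl(dst_vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

	/* recv.handle now holds the migrated guest's SEV handle. */
}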

Peter Gonda (3):
  KVM, SEV: Refactor out function for unregistering encrypted regions
  KVM, SEV: Add support for SEV local migration
  KVM, SEV: Add support for SEV-ES local migration

 .../virt/kvm/amd-memory-encryption.rst        |  43 ++
 arch/x86/kvm/svm/sev.c                        | 434 +++++++++++++++++-
 arch/x86/kvm/svm/svm.h                        |   1 +
 include/uapi/linux/kvm.h                      |  12 +
 4 files changed, 471 insertions(+), 19 deletions(-)

base-commit: f1b832550832

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

-- 
2.32.0.288.g62a8d224e6-goog



* [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions
  2021-06-21 16:31 [PATCH 0/3] Add AMD SEV and SEV-ES local migration support Peter Gonda
@ 2021-06-21 16:31 ` Peter Gonda
  2021-07-12 20:05   ` Brijesh Singh
                     ` (2 more replies)
  2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
  2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
  2 siblings, 3 replies; 14+ messages in thread
From: Peter Gonda @ 2021-06-21 16:31 UTC (permalink / raw)
  To: kvm
  Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

Factor out helper function for freeing the encrypted region list.

Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

---
 arch/x86/kvm/svm/sev.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 46e339c84998..5af46ff6ec48 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1767,11 +1767,25 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
 	return ret;
 }
 
+static void __unregister_region_list_locked(struct kvm *kvm,
+					    struct list_head *mem_regions)
+{
+	struct enc_region *pos, *q;
+
+	lockdep_assert_held(&kvm->lock);
+
+	if (list_empty(mem_regions))
+		return;
+
+	list_for_each_entry_safe(pos, q, mem_regions, list) {
+		__unregister_enc_region_locked(kvm, pos);
+		cond_resched();
+	}
+}
+
 void sev_vm_destroy(struct kvm *kvm)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-	struct list_head *head = &sev->regions_list;
-	struct list_head *pos, *q;
 
 	if (!sev_guest(kvm))
 		return;
@@ -1795,13 +1809,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	 * if userspace was terminated before unregistering the memory regions
 	 * then lets unpin all the registered memory.
 	 */
-	if (!list_empty(head)) {
-		list_for_each_safe(pos, q, head) {
-			__unregister_enc_region_locked(kvm,
-				list_entry(pos, struct enc_region, list));
-			cond_resched();
-		}
-	}
+	__unregister_region_list_locked(kvm, &sev->regions_list);
 
 	mutex_unlock(&kvm->lock);
 
-- 
2.32.0.288.g62a8d224e6-goog



* [PATCH 2/3] KVM, SEV: Add support for SEV local migration
  2021-06-21 16:31 [PATCH 0/3] Add AMD SEV and SEV-ES local migration support Peter Gonda
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
@ 2021-06-21 16:31 ` Peter Gonda
  2021-07-12 21:09   ` Brijesh Singh
                     ` (2 more replies)
  2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
  2 siblings, 3 replies; 14+ messages in thread
From: Peter Gonda @ 2021-06-21 16:31 UTC (permalink / raw)
  To: kvm
  Cc: Peter Gonda, Lars Bull, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Brijesh Singh,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	linux-kernel

Local migration provides a low-cost mechanism for userspace VMM upgrades.
It is an alternative to traditional (i.e., remote) live migration. Whereas
remote migration handles moving a guest to a new host, local migration only
handles moving a guest to a new userspace VMM within a host.

For SEV to work with local migration, contents of the SEV info struct
such as the ASID (used to index the encryption key in the AMD SP) and
the list of memory regions need to be transferred to the target VM.
Add commands for sending and receiving the SEV info.

To avoid exposing this internal state to userspace and prevent other
processes from importing state they shouldn't have access to, the send
returns a token to userspace that is handed off to the target VM. The
target passes in this token to receive the sent state. The token is only
valid for one-time use. Functionality on the source becomes limited after
send has been performed. If the source is destroyed before the target has
received, the token becomes invalid.

The target is expected to be initialized (sev_guest_init), but not in a
launched state (sev_launch_start), when performing receive. Once the
target has received, it will be in a launched state and will not need to
perform the typical SEV launch commands.

Co-developed-by: Lars Bull <larsbull@google.com>
Signed-off-by: Lars Bull <larsbull@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

---
 .../virt/kvm/amd-memory-encryption.rst        |  43 +++
 arch/x86/kvm/svm/sev.c                        | 270 +++++++++++++++++-
 arch/x86/kvm/svm/svm.h                        |   1 +
 include/uapi/linux/kvm.h                      |  12 +
 4 files changed, 317 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 5ec8a1902e15..0f9030e3dcfe 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -427,6 +427,49 @@ issued by the hypervisor to make the guest ready for execution.
 
 Returns: 0 on success, -negative on error
 
+10. KVM_SEV_LOCAL_SEND
+----------------------------------
+
+The KVM_SEV_LOCAL_SEND command is used to stage the VM's SEV info
+for the purposes of migrating memory to a new local VM while using the same SEV
+key. If the source VM is destroyed before the staged info has been received by
+the target, the info is lost. Once the info has been staged, only the
+KVM_SEV_DBG_DECRYPT and KVM_SEV_DBG_ENCRYPT commands can be
+used by the source.
+
+Parameters (out): struct kvm_sev_local_send
+
+Returns: 0 on success, -negative on error
+
+::
+
+    struct kvm_sev_local_send {
+        __u64 info_token;    /* token referencing the staged info */
+    };
+
+11. KVM_SEV_LOCAL_RECEIVE
+-------------------------------------
+
+The KVM_SEV_LOCAL_RECEIVE command is used to transfer staged SEV
+info to a target VM from some source VM. SEV on the target VM should be active
+when receive is performed, but not yet launched and without any pinned memory.
+The launch commands should be skipped after receive because they should have
+already been performed on the source.
+
+Parameters (in/out): struct kvm_sev_local_receive
+
+Returns: 0 on success, -negative on error
+
+::
+
+    struct kvm_sev_local_receive {
+        __u64 info_token;    /* token referencing the staged info */
+        __u32 handle;        /* guest handle */
+    };
+
+On success, the 'handle' field contains the handle for this SEV guest.
+
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5af46ff6ec48..7c33ad2b910d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -14,6 +14,7 @@
 #include <linux/psp-sev.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
+#include <linux/random.h>
 #include <linux/misc_cgroup.h>
 #include <linux/processor.h>
 #include <linux/trace_events.h>
@@ -57,6 +58,8 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
 #define sev_es_enabled false
 #endif /* CONFIG_KVM_AMD_SEV */
 
+#define MAX_RAND_RETRY    3
+
 static u8 sev_enc_bit;
 static DECLARE_RWSEM(sev_deactivate_lock);
 static DEFINE_MUTEX(sev_bitmap_lock);
@@ -74,6 +77,22 @@ struct enc_region {
 	unsigned long size;
 };
 
+struct sev_info_migration_node {
+	struct hlist_node hnode;
+	u64 token;
+	bool valid;
+
+	unsigned int asid;
+	unsigned int handle;
+	unsigned long pages_locked;
+	struct list_head regions_list;
+	struct misc_cg *misc_cg;
+};
+
+#define SEV_INFO_MIGRATION_HASH_BITS    7
+static DEFINE_HASHTABLE(sev_info_migration_hash, SEV_INFO_MIGRATION_HASH_BITS);
+static DEFINE_SPINLOCK(sev_info_migration_hash_lock);
+
 /* Called with the sev_bitmap_lock held, or on shutdown  */
 static int sev_flush_asids(int min_asid, int max_asid)
 {
@@ -1094,6 +1113,185 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static struct sev_info_migration_node *find_migration_info(unsigned long token)
+{
+	struct sev_info_migration_node *entry;
+
+	hash_for_each_possible(sev_info_migration_hash, entry, hnode, token) {
+		if (entry->token == token)
+			return entry;
+	}
+
+	return NULL;
+}
+
+/*
+ * Places @entry into the |sev_info_migration_hash|. Returns 0 if successful
+ * and ownership of @entry is transferred to the hashmap.
+ */
+static int place_migration_node(struct sev_info_migration_node *entry)
+{
+	u64 token = 0;
+	unsigned int retries;
+	int ret = -EFAULT;
+
+	/*
+	 * Generate a token associated with this VM's SEV info that userspace
+	 * can use to import on the other side. We use 0 to indicate a not-
+	 * present token. The token cannot collide with other existing ones, so
+	 * reroll a few times until we get a valid token. In the unlikely event
+	 * we're having trouble generating a unique token, give up and let
+	 * userspace retry if it needs to.
+	 */
+	spin_lock(&sev_info_migration_hash_lock);
+	for (retries = 0; retries < MAX_RAND_RETRY; retries++)  {
+		get_random_bytes((void *)&token, sizeof(token));
+
+		if (find_migration_info(token))
+			continue;
+
+		entry->token = token;
+		entry->valid = true;
+
+		hash_add(sev_info_migration_hash, &entry->hnode, token);
+		ret = 0;
+		goto out;
+	}
+
+out:
+	spin_unlock(&sev_info_migration_hash_lock);
+	return ret;
+}
+
+static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_info_migration_node *entry;
+	struct kvm_sev_local_send params;
+	u64 token;
+	int ret = -EFAULT;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (sev->es_active)
+		return -EPERM;
+
+	if (sev->info_token != 0)
+		return -EEXIST;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			   sizeof(params)))
+		return -EFAULT;
+
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return -ENOMEM;
+
+	entry->asid = sev->asid;
+	entry->handle = sev->handle;
+	entry->pages_locked = sev->pages_locked;
+	entry->misc_cg = sev->misc_cg;
+
+	INIT_LIST_HEAD(&entry->regions_list);
+	list_replace_init(&sev->regions_list, &entry->regions_list);
+
+	if (place_migration_node(entry))
+		goto e_listdel;
+
+	token = entry->token;
+
+	params.info_token = token;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
+			 sizeof(params)))
+		goto e_hashdel;
+
+	sev->info_token = token;
+
+	return 0;
+
+e_hashdel:
+	spin_lock(&sev_info_migration_hash_lock);
+	hash_del(&entry->hnode);
+	spin_unlock(&sev_info_migration_hash_lock);
+
+e_listdel:
+	list_replace_init(&entry->regions_list, &sev->regions_list);
+
+	kfree(entry);
+
+	return ret;
+}
+
+static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_info_migration_node *entry;
+	struct kvm_sev_local_receive params;
+	struct kvm_sev_info old_info;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (sev->es_active)
+		return -EPERM;
+
+	if (sev->handle != 0)
+		return -EPERM;
+
+	if (!list_empty(&sev->regions_list))
+		return -EPERM;
+
+	if (copy_from_user(&params,
+			   (void __user *)(uintptr_t)argp->data,
+			   sizeof(params)))
+		return -EFAULT;
+
+	spin_lock(&sev_info_migration_hash_lock);
+	entry = find_migration_info(params.info_token);
+	if (!entry || !entry->valid)
+		goto err_unlock;
+
+	memcpy(&old_info, sev, sizeof(old_info));
+
+	/*
+	 * The source VM always frees @entry. On the target we simply
+	 * mark the token as invalid to notify the source the sev info
+	 * has been moved successfully.
+	 */
+	entry->valid = false;
+	sev->active = true;
+	sev->asid = entry->asid;
+	sev->handle = entry->handle;
+	sev->pages_locked = entry->pages_locked;
+	sev->misc_cg = entry->misc_cg;
+
+	INIT_LIST_HEAD(&sev->regions_list);
+	list_replace_init(&entry->regions_list, &sev->regions_list);
+
+	spin_unlock(&sev_info_migration_hash_lock);
+
+	params.handle = sev->handle;
+
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
+			 sizeof(params)))
+		goto err_unwind;
+
+	sev_asid_free(&old_info);
+	return 0;
+
+err_unwind:
+	spin_lock(&sev_info_migration_hash_lock);
+	list_replace_init(&sev->regions_list, &entry->regions_list);
+	entry->valid = true;
+	memcpy(sev, &old_info, sizeof(*sev));
+
+err_unlock:
+	spin_unlock(&sev_info_migration_hash_lock);
+
+	return -EFAULT;
+}
+
 /* Userspace wants to query session length. */
 static int
 __sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
@@ -1513,6 +1711,18 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 		goto out;
 	}
 
+	/*
+	 * If this VM has started exporting its SEV contents to another VM,
+	 * it's not allowed to do any more SEV operations that may modify the
+	 * SEV state.
+	 */
+	if (to_kvm_svm(kvm)->sev_info.info_token &&
+	    sev_cmd.id != KVM_SEV_DBG_ENCRYPT &&
+	    sev_cmd.id != KVM_SEV_DBG_DECRYPT) {
+		r = -EPERM;
+		goto out;
+	}
+
 	switch (sev_cmd.id) {
 	case KVM_SEV_ES_INIT:
 		if (!sev_es_enabled) {
@@ -1553,6 +1763,12 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_GET_ATTESTATION_REPORT:
 		r = sev_get_attestation_report(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_LOCAL_SEND:
+		r = sev_local_send(kvm, &sev_cmd);
+		break;
+	case KVM_SEV_LOCAL_RECEIVE:
+		r = sev_local_receive(kvm, &sev_cmd);
+		break;
 	case KVM_SEV_SEND_START:
 		r = sev_send_start(kvm, &sev_cmd);
 		break;
@@ -1786,6 +2002,8 @@ static void __unregister_region_list_locked(struct kvm *kvm,
 void sev_vm_destroy(struct kvm *kvm)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_info_migration_node *entry = NULL;
+	bool info_migrated = false;
 
 	if (!sev_guest(kvm))
 		return;
@@ -1796,25 +2014,59 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}
 
+	/*
+	 * If userspace has requested that we migrate the SEV info to a new VM,
+	 * then we own and must remove an entry node in the tracking data
+	 * structure. Whether we clean up the data in our SEV info struct and
+	 * entry node depends on whether userspace has done the migration,
+	 * which transfers ownership to a new VM. We can identify that
+	 * migration has occurred by checking if the node is marked invalid.
+	 */
+	if (sev->info_token != 0) {
+		spin_lock(&sev_info_migration_hash_lock);
+		entry = find_migration_info(sev->info_token);
+		if (entry) {
+			info_migrated = !entry->valid;
+			hash_del(&entry->hnode);
+		} else
+			WARN(1,
+			     "SEV VM was marked for export, but does not have associated export node.\n");
+		spin_unlock(&sev_info_migration_hash_lock);
+	}
+
 	mutex_lock(&kvm->lock);
 
 	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
+	 * Adding memory regions after a local send has started
+	 * is dangerous.
 	 */
-	wbinvd_on_all_cpus();
+	if (sev->info_token != 0 && !list_empty(&sev->regions_list)) {
+		WARN(1,
+		     "Source SEV regions list non-empty after export request. List is not expected to be modified after export request.\n");
+		__unregister_region_list_locked(kvm, &sev->regions_list);
+	}
 
 	/*
-	 * if userspace was terminated before unregistering the memory regions
-	 * then lets unpin all the registered memory.
+	 * If userspace was terminated before unregistering the memory
+	 * regions then lets unpin all the registered memory.
 	 */
-	__unregister_region_list_locked(kvm, &sev->regions_list);
+	if (entry)
+		__unregister_region_list_locked(kvm, &entry->regions_list);
 
 	mutex_unlock(&kvm->lock);
 
-	sev_unbind_asid(kvm, sev->handle);
-	sev_asid_free(sev);
+	/*
+	 * Ensure that all guest tagged cache entries are flushed before
+	 * releasing the pages back to the system for use. CLFLUSH will
+	 * not do this, so issue a WBINVD.
+	 */
+	wbinvd_on_all_cpus();
+	if (!info_migrated) {
+		sev_unbind_asid(kvm, sev->handle);
+		sev_asid_free(sev);
+	}
+
+	kfree(entry);
 }
 
 void __init sev_set_cpu_caps(void)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 70419e417c0d..1ae8fe623c70 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -65,6 +65,7 @@ struct kvm_sev_info {
 	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
 	struct kvm *enc_context_owner; /* Owner of copied encryption context */
 	struct misc_cg *misc_cg; /* For misc cgroup accounting */
+	u64 info_token; /* Token for SEV info local migration */
 };
 
 struct kvm_svm {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 79d9c44d1ad7..b317d4b2507d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1678,6 +1678,9 @@ enum sev_cmd_id {
 	KVM_SEV_GET_ATTESTATION_REPORT,
 	/* Guest Migration Extension */
 	KVM_SEV_SEND_CANCEL,
+	/* Local migration commands */
+	KVM_SEV_LOCAL_SEND,
+	KVM_SEV_LOCAL_RECEIVE,
 
 	KVM_SEV_NR_MAX,
 };
@@ -1775,6 +1778,15 @@ struct kvm_sev_receive_update_data {
 	__u32 trans_len;
 };
 
+struct kvm_sev_local_send {
+	__u64 info_token;
+};
+
+struct kvm_sev_local_receive {
+	__u64 info_token;
+	__u32 handle;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.32.0.288.g62a8d224e6-goog



* [PATCH 3/3] KVM, SEV: Add support for SEV-ES local migration
  2021-06-21 16:31 [PATCH 0/3] Add AMD SEV and SEV-ES local migration support Peter Gonda
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
  2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
@ 2021-06-21 16:31 ` Peter Gonda
  2021-07-13 18:41   ` Brijesh Singh
                     ` (2 more replies)
  2 siblings, 3 replies; 14+ messages in thread
From: Peter Gonda @ 2021-06-21 16:31 UTC (permalink / raw)
  To: kvm
  Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

Local migration provides a low-cost mechanism for userspace VMM upgrades.
It is an alternative to traditional (i.e., remote) live migration. Whereas
remote migration handles moving a guest to a new host, local migration only
handles moving a guest to a new userspace VMM within a host.

For SEV-ES to work with local migration the VMSAs, GHCB metadata,
and other SEV-ES info need to be preserved along with the guest's
memory. KVM maintains a pointer to each vCPU's GHCB and may additionally
hold a copy of the GHCB's save area if the guest has been using it
for NAE handling. The local send and receive ioctls have been updated to
move this additional per-vCPU SEV-ES metadata into the hashmap used for
SEV local migration data.

Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

---
 arch/x86/kvm/svm/sev.c | 164 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 150 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7c33ad2b910d..33df7ed08d21 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -77,6 +77,19 @@ struct enc_region {
 	unsigned long size;
 };
 
+struct vmsa_node {
+	struct list_head list;
+	int vcpu_id;
+	struct vmcb_save_area *vmsa;
+	struct ghcb *ghcb;
+	u64 ghcb_gpa;
+
+	void *ghcb_sa;
+	u64 ghcb_sa_len;
+	bool ghcb_sa_sync;
+	bool ghcb_sa_free;
+};
+
 struct sev_info_migration_node {
 	struct hlist_node hnode;
 	u64 token;
@@ -87,6 +100,11 @@ struct sev_info_migration_node {
 	unsigned long pages_locked;
 	struct list_head regions_list;
 	struct misc_cg *misc_cg;
+
+	/* The following fields are for SEV-ES guests */
+	bool es_enabled;
+	struct list_head vmsa_list;
+	u64 ap_jump_table;
 };
 
 #define SEV_INFO_MIGRATION_HASH_BITS    7
@@ -1163,6 +1181,94 @@ static int place_migration_node(struct sev_info_migration_node *entry)
 	return ret;
 }
 
+static int process_vmsa_list(struct kvm *kvm, struct list_head *vmsa_list)
+{
+	struct vmsa_node *vmsa_node, *q;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_svm *svm;
+
+	lockdep_assert_held(&kvm->lock);
+
+	if (!vmsa_list)
+		return 0;
+
+	list_for_each_entry(vmsa_node, vmsa_list, list) {
+		if (!kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id)) {
+			WARN(1,
+			     "Failed to find VCPU with ID %d despite presence in VMSA list.\n",
+			     vmsa_node->vcpu_id);
+			return -1;
+		}
+	}
+
+	/*
+	 * Move any stashed VMSAs back to their respective VMCBs and delete
+	 * those nodes.
+	 */
+	list_for_each_entry_safe(vmsa_node, q, vmsa_list, list) {
+		vcpu = kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id);
+		svm = to_svm(vcpu);
+		svm->vmsa = vmsa_node->vmsa;
+		svm->ghcb = vmsa_node->ghcb;
+		svm->vmcb->control.ghcb_gpa = vmsa_node->ghcb_gpa;
+		svm->vcpu.arch.guest_state_protected = true;
+		svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
+		svm->ghcb_sa = vmsa_node->ghcb_sa;
+		svm->ghcb_sa_len = vmsa_node->ghcb_sa_len;
+		svm->ghcb_sa_sync = vmsa_node->ghcb_sa_sync;
+		svm->ghcb_sa_free = vmsa_node->ghcb_sa_free;
+
+		list_del(&vmsa_node->list);
+		kfree(vmsa_node);
+	}
+
+	return 0;
+}
+
+static int create_vmsa_list(struct kvm *kvm,
+			    struct sev_info_migration_node *entry)
+{
+	int i;
+	const int num_vcpus = atomic_read(&kvm->online_vcpus);
+	struct vmsa_node *node;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_svm *svm;
+
+	INIT_LIST_HEAD(&entry->vmsa_list);
+	for (i = 0; i < num_vcpus; ++i) {
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (!node)
+			goto e_freelist;
+
+		vcpu = kvm->vcpus[i];
+		node->vcpu_id = vcpu->vcpu_id;
+
+		svm = to_svm(vcpu);
+		node->vmsa = svm->vmsa;
+		svm->vmsa = NULL;
+		node->ghcb = svm->ghcb;
+		svm->ghcb = NULL;
+		node->ghcb_gpa = svm->vmcb->control.ghcb_gpa;
+		node->ghcb_sa = svm->ghcb_sa;
+		svm->ghcb_sa = NULL;
+		node->ghcb_sa_len = svm->ghcb_sa_len;
+		svm->ghcb_sa_len = 0;
+		node->ghcb_sa_sync = svm->ghcb_sa_sync;
+		svm->ghcb_sa_sync = false;
+		node->ghcb_sa_free = svm->ghcb_sa_free;
+		svm->ghcb_sa_free = false;
+
+		list_add_tail(&node->list, &entry->vmsa_list);
+	}
+
+	return 0;
+
+e_freelist:
+	if (process_vmsa_list(kvm, &entry->vmsa_list))
+		WARN(1, "Unable to move VMSA list back to source VM. Guest is in a broken state now.");
+	return -1;
+}
+
 static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
@@ -1174,9 +1280,6 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	if (!sev_guest(kvm))
 		return -ENOTTY;
 
-	if (sev->es_active)
-		return -EPERM;
-
 	if (sev->info_token != 0)
 		return -EEXIST;
 
@@ -1196,8 +1299,19 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	INIT_LIST_HEAD(&entry->regions_list);
 	list_replace_init(&sev->regions_list, &entry->regions_list);
 
+	if (sev_es_guest(kvm)) {
+		/*
+		 * If this is an ES guest, we need to move each VMCB's VMSA into a
+		 * list for migration.
+		 */
+		entry->es_enabled = true;
+		entry->ap_jump_table = sev->ap_jump_table;
+		if (create_vmsa_list(kvm, entry))
+			goto e_listdel;
+	}
+
 	if (place_migration_node(entry))
-		goto e_listdel;
+		goto e_vmsadel;
 
 	token = entry->token;
 
@@ -1215,6 +1329,11 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	hash_del(&entry->hnode);
 	spin_unlock(&sev_info_migration_hash_lock);
 
+e_vmsadel:
+	if (sev_es_guest(kvm) && process_vmsa_list(kvm, &entry->vmsa_list))
+		WARN(1,
+		     "Unable to move VMSA list back to source VM. Guest is in a broken state now.");
+
 e_listdel:
 	list_replace_init(&entry->regions_list, &sev->regions_list);
 
@@ -1233,9 +1352,6 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	if (!sev_guest(kvm))
 		return -ENOTTY;
 
-	if (sev->es_active)
-		return -EPERM;
-
 	if (sev->handle != 0)
 		return -EPERM;
 
@@ -1254,6 +1370,14 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
 
 	memcpy(&old_info, sev, sizeof(old_info));
 
+	if (entry->es_enabled) {
+		if (process_vmsa_list(kvm, &entry->vmsa_list))
+			goto err_unlock;
+
+		sev->es_active = true;
+		sev->ap_jump_table = entry->ap_jump_table;
+	}
+
 	/*
 	 * The source VM always frees @entry. On the target we simply
 	 * mark the token as invalid to notify the source the sev info
@@ -2046,12 +2170,22 @@ void sev_vm_destroy(struct kvm *kvm)
 		__unregister_region_list_locked(kvm, &sev->regions_list);
 	}
 
-	/*
-	 * If userspace was terminated before unregistering the memory
-	 * regions then lets unpin all the registered memory.
-	 */
-	if (entry)
+	if (entry) {
+		/*
+		 * If there are any saved VMSAs, restore them so they can be
+		 * destructed through the normal path.
+		 */
+		if (entry->es_enabled)
+			if (process_vmsa_list(kvm, &entry->vmsa_list))
+				WARN(1,
+				     "Unable to clean up vmsa_list");
+
+		/*
+		 * If userspace was terminated before unregistering the memory
+		 * regions then lets unpin all the registered memory.
+		 */
 		__unregister_region_list_locked(kvm, &entry->regions_list);
+	}
 
 	mutex_unlock(&kvm->lock);
 
@@ -2243,9 +2377,11 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
 
 	svm = to_svm(vcpu);
 
-	if (vcpu->arch.guest_state_protected)
+	if (svm->ghcb && vcpu->arch.guest_state_protected)
 		sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
-	__free_page(virt_to_page(svm->vmsa));
+
+	if (svm->vmsa)
+		__free_page(virt_to_page(svm->vmsa));
 
 	if (svm->ghcb_sa_free)
 		kfree(svm->ghcb_sa);
-- 
2.32.0.288.g62a8d224e6-goog



* Re: [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
@ 2021-07-12 20:05   ` Brijesh Singh
  2021-07-13 21:40   ` Sean Christopherson
  2021-07-13 23:18   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Brijesh Singh @ 2021-07-12 20:05 UTC (permalink / raw)
  To: Peter Gonda, kvm
  Cc: brijesh.singh, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel



On 6/21/21 11:31 AM, Peter Gonda wrote:
> Factor out helper function for freeing the encrypted region list.
> 
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> 

Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>

thanks


* Re: [PATCH 2/3] KVM, SEV: Add support for SEV local migration
  2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
@ 2021-07-12 21:09   ` Brijesh Singh
  2021-07-13 20:12     ` Peter Gonda
  2021-07-13 22:12   ` Sean Christopherson
  2021-07-13 23:24   ` Marc Orr
  2 siblings, 1 reply; 14+ messages in thread
From: Brijesh Singh @ 2021-07-12 21:09 UTC (permalink / raw)
  To: Peter Gonda, kvm
  Cc: brijesh.singh, Lars Bull, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel



On 6/21/21 11:31 AM, Peter Gonda wrote:

> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	if (sev->es_active)
> +		return -EPERM;
> +
> +	if (sev->info_token != 0)
> +		return -EEXIST;
> +
> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +			   sizeof(params)))
> +		return -EFAULT;
> +
> +	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> +	if (!entry)
> +		return -ENOMEM;
> +
> +	entry->asid = sev->asid;
> +	entry->handle = sev->handle;
> +	entry->pages_locked = sev->pages_locked;
> +	entry->misc_cg = sev->misc_cg;
> +
> +	INIT_LIST_HEAD(&entry->regions_list);
> +	list_replace_init(&sev->regions_list, &entry->regions_list);

I believe the entry->regions_list will be NULL if the command is called
before the memory regions are registered. The question is, do you need
to check for a valid sev->handle (i.e., LAUNCH_START is done)?


> +
>   /* Userspace wants to query session length. */
>   static int
>   __sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
> @@ -1513,6 +1711,18 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>   		goto out;
>   	}
>   
> +	/*
> +	 * If this VM has started exporting its SEV contents to another VM,
> +	 * it's not allowed to do any more SEV operations that may modify the
> +	 * SEV state.
> +	 */
> +	if (to_kvm_svm(kvm)->sev_info.info_token &&
> +	    sev_cmd.id != KVM_SEV_DBG_ENCRYPT &&
> +	    sev_cmd.id != KVM_SEV_DBG_DECRYPT) {
> +		r = -EPERM;
> +		goto out;
> +	}

Maybe move this check in a function so that it can later extended for 
SEV-SNP (cmd ids for the debug is different).

Something like:

static bool is_local_mig_active(struct kvm *)
{
	....
}

Once the migration range hypercall is merged, we also need to preserve 
any metadata memory maintained by KVM for the unencrypted ranges.

-Brijesh


* Re: [PATCH 3/3] KVM, SEV: Add support for SEV-ES local migration
  2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
@ 2021-07-13 18:41   ` Brijesh Singh
  2021-07-13 22:21   ` Sean Christopherson
  2021-07-13 23:25   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Brijesh Singh @ 2021-07-13 18:41 UTC (permalink / raw)
  To: Peter Gonda, kvm
  Cc: brijesh.singh, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel



On 6/21/21 11:31 AM, Peter Gonda wrote:

> @@ -1196,8 +1299,19 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>   	INIT_LIST_HEAD(&entry->regions_list);
>   	list_replace_init(&sev->regions_list, &entry->regions_list);
>   
> +	if (sev_es_guest(kvm)) {
> +		/*
> +		 * If this is an ES guest, we need to move each VMCB's VMSA into a
> +		 * list for migration.
> +		 */
> +		entry->es_enabled = true;
> +		entry->ap_jump_table = sev->ap_jump_table;
> +		if (create_vmsa_list(kvm, entry))
> +			goto e_listdel;
> +	}
> +

The patch looks good. Similar to the previous patch, do we need to check
for the SEV guest state >= LAUNCH_UPDATE to be sure that the VMSAs are
encrypted before we go about sharing them with the new VMM?

-Brijesh


* Re: [PATCH 2/3] KVM, SEV: Add support for SEV local migration
  2021-07-12 21:09   ` Brijesh Singh
@ 2021-07-13 20:12     ` Peter Gonda
  0 siblings, 0 replies; 14+ messages in thread
From: Peter Gonda @ 2021-07-13 20:12 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: kvm list, Lars Bull, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

Thanks for reviewing Brijesh! Seanjc@ said he would comment so I'll
lump your suggestions and his into the V2.

On Mon, Jul 12, 2021 at 3:09 PM Brijesh Singh <brijesh.singh@amd.com> wrote:
>
>
>
> On 6/21/21 11:31 AM, Peter Gonda wrote:
>
> > +     if (!sev_guest(kvm))
> > +             return -ENOTTY;
> > +
> > +     if (sev->es_active)
> > +             return -EPERM;
> > +
> > +     if (sev->info_token != 0)
> > +             return -EEXIST;
> > +
> > +     if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> > +                        sizeof(params)))
> > +             return -EFAULT;
> > +
> > +     entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> > +     if (!entry)
> > +             return -ENOMEM;
> > +
> > +     entry->asid = sev->asid;
> > +     entry->handle = sev->handle;
> > +     entry->pages_locked = sev->pages_locked;
> > +     entry->misc_cg = sev->misc_cg;
> > +
> > +     INIT_LIST_HEAD(&entry->regions_list);
> > +     list_replace_init(&sev->regions_list, &entry->regions_list);
>
> I believe the entry->regions_list will be NULL if the command is called
> before the memory regions are registered. The quesiton is, do you need
> to check whether for a valid sev->handle (i.e, LAUNCH_START is done)?

Makes sense to add a check for LAUNCH_START by checking sev->handle,
I'll add that in V2.

Would it also make sense to add similar checks to ioctls like launch
update, measure, and finish? If so I can send a separate patch to add
those checks.

>
>
> > +
> >   /* Userspace wants to query session length. */
> >   static int
> >   __sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
> > @@ -1513,6 +1711,18 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
> >               goto out;
> >       }
> >
> > +     /*
> > +      * If this VM has started exporting its SEV contents to another VM,
> > +      * it's not allowed to do any more SEV operations that may modify the
> > +      * SEV state.
> > +      */
> > +     if (to_kvm_svm(kvm)->sev_info.info_token &&
> > +         sev_cmd.id != KVM_SEV_DBG_ENCRYPT &&
> > +         sev_cmd.id != KVM_SEV_DBG_DECRYPT) {
> > +             r = -EPERM;
> > +             goto out;
> > +     }
>
> Maybe move this check in a function so that it can later extended for
> SEV-SNP (cmd ids for the debug is different).
>
> Something like:
>
> static bool is_local_mig_active(struct kvm *)
> {
>         ....
> }

Will do!

>
> Once the migration range hypercall is merged, we also need to preserve
> any metadata memory maintained by KVM for the unencrypted ranges.

OK. Any suggestions on how to manage these impending conflicts? Are
those almost ready, meaning I should build these patches on top of them,
or what would you suggest?

>
> -Brijesh


* Re: [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
  2021-07-12 20:05   ` Brijesh Singh
@ 2021-07-13 21:40   ` Sean Christopherson
  2021-07-13 23:18   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2021-07-13 21:40 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, Paolo Bonzini, David Rientjes, Dr . David Alan Gilbert,
	Brijesh Singh, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, linux-kernel

On Mon, Jun 21, 2021, Peter Gonda wrote:
> Factor out helper function for freeing the encrypted region list.

...

>  arch/x86/kvm/svm/sev.c | 26 +++++++++++++++++---------
>  1 file changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 46e339c84998..5af46ff6ec48 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1767,11 +1767,25 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
>  	return ret;
>  }
>  
> +static void __unregister_region_list_locked(struct kvm *kvm,
> +					    struct list_head *mem_regions)

I don't think the underscores or the "locked" qualifier are necessary.  Unlike
__unregister_enc_region_locked(), there is no unregister_region_list() to avoid.

I'd also vote to drop "list" and instead use a plural "regions".  Without the
plural form, it's not immediately obvious that the difference is that this
helper deletes multiple regions.

Last nit, I assume these are all encrypted regions?  If so, unregister_enc_regions()
seems like the natural choice.

> +{
> +	struct enc_region *pos, *q;
> +
> +	lockdep_assert_held(&kvm->lock);

This lockdep assertion (big thumbs up) is part of why I think it's a-ok to drop the "locked"
qualifier.

> +
> +	if (list_empty(mem_regions))
> +		return;
> +
> +	list_for_each_entry_safe(pos, q, mem_regions, list) {
> +		__unregister_enc_region_locked(kvm, pos);
> +		cond_resched();
> +	}
> +}


* Re: [PATCH 2/3] KVM, SEV: Add support for SEV local migration
  2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
  2021-07-12 21:09   ` Brijesh Singh
@ 2021-07-13 22:12   ` Sean Christopherson
  2021-07-13 23:24   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2021-07-13 22:12 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, Lars Bull, Paolo Bonzini, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

On Mon, Jun 21, 2021, Peter Gonda wrote:
> Local migration provides a low-cost mechanism for userspace VMM upgrades.
> It is an alternative to traditional (i.e., remote) live migration. Whereas
> remote migration handles moving a guest to a new host, local migration only
> handles moving a guest to a new userspace VMM within a host.

Maybe use intra-host vs. inter-host instead of local vs. remote?  That'll save
having to define local and remote.  KVM_SEV_INTRA_HOST_{SEND,RECEIVE} is a bit
wordy, but also very specific.

> For SEV to work with local migration, contents of the SEV info struct
> such as the ASID (used to index the encryption key in the AMD SP) and
> the list
> of memory regions need to be transferred to the target VM. Adds
> commands for sending and receiving the sev info.
> 
> To avoid exposing this internal state to userspace and prevent other
> processes from importing state they shouldn't have access to, the send
> returns a token to userspace that is handed off to the target VM. The
> target passes in this token to receive the sent state. The token is only
> valid for one-time use. Functionality on the source becomes limited
> after
> send has been performed. If the source is destroyed before the target
> has
> received, the token becomes invalid.

Something appears to be mangling the changelogs, or maybe you have a cat that
likes stepping on the Enter key? :-D

> The target is expected to be initialized (sev_guest_init), but not
> launched
> state (sev_launch_start) when performing receive. Once the target has
> received, it will be in a launched state and will not need to perform
> the
> typical SEV launch commands.

...

> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 5af46ff6ec48..7c33ad2b910d 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -14,6 +14,7 @@
>  #include <linux/psp-sev.h>
>  #include <linux/pagemap.h>
>  #include <linux/swap.h>
> +#include <linux/random.h>
>  #include <linux/misc_cgroup.h>
>  #include <linux/processor.h>
>  #include <linux/trace_events.h>
> @@ -57,6 +58,8 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
>  #define sev_es_enabled false
>  #endif /* CONFIG_KVM_AMD_SEV */
>  
> +#define MAX_RAND_RETRY    3
> +
>  static u8 sev_enc_bit;
>  static DECLARE_RWSEM(sev_deactivate_lock);
>  static DEFINE_MUTEX(sev_bitmap_lock);
> @@ -74,6 +77,22 @@ struct enc_region {
>  	unsigned long size;
>  };
>  
> +struct sev_info_migration_node {
> +	struct hlist_node hnode;
> +	u64 token;
> +	bool valid;
> +
> +	unsigned int asid;
> +	unsigned int handle;
> +	unsigned long pages_locked;
> +	struct list_head regions_list;
> +	struct misc_cg *misc_cg;
> +};
> +
> +#define SEV_INFO_MIGRATION_HASH_BITS    7
> +static DEFINE_HASHTABLE(sev_info_migration_hash, SEV_INFO_MIGRATION_HASH_BITS);
> +static DEFINE_SPINLOCK(sev_info_migration_hash_lock);
> +
>  /* Called with the sev_bitmap_lock held, or on shutdown  */
>  static int sev_flush_asids(int min_asid, int max_asid)
>  {
> @@ -1094,6 +1113,185 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return ret;
>  }
>  
> +static struct sev_info_migration_node *find_migration_info(unsigned long token)
> +{
> +	struct sev_info_migration_node *entry;
> +
> +	hash_for_each_possible(sev_info_migration_hash, entry, hnode, token) {
> +		if (entry->token == token)
> +			return entry;
> +	}
> +
> +	return NULL;
> +}
> +
> +/*
> + * Places @entry into the |sev_info_migration_hash|. Returns 0 if successful
> + * and ownership of @entry is transferred to the hashmap.
> + */
> +static int place_migration_node(struct sev_info_migration_node *entry)
> +{
> +	u64 token = 0;
> +	unsigned int retries;
> +	int ret = -EFAULT;
> +
> +	/*
> +	 * Generate a token associated with this VM's SEV info that userspace
> +	 * can use to import on the other side. We use 0 to indicate a not-
> +	 * present token. The token cannot collide with other existing ones, so
> +	 * reroll a few times until we get a valid token. In the unlikely event
> +	 * we're having trouble generating a unique token, give up and let
> +	 * userspace retry if it needs to.
> +	 */
> +	spin_lock(&sev_info_migration_hash_lock);
> +	for (retries = 0; retries < MAX_RAND_RETRY; retries++)  {
> +		get_random_bytes((void *)&token, sizeof(token));

Why is the kernel responsible for generating the token?  IIUC, the purpose of
the random generation is to make the token difficult to guess by a process other
than the intended recipient, e.g. by a malicious process.  But that's a userspace
problem that can be better solved by the sender.

> +
> +		if (find_migration_info(token))
> +			continue;
> +
> +		entry->token = token;
> +		entry->valid = true;
> +
> +		hash_add(sev_info_migration_hash, &entry->hnode, token);
> +		ret = 0;
> +		goto out;
> +	}
> +
> +out:
> +	spin_unlock(&sev_info_migration_hash_lock);
> +	return ret;
> +}
> +
> +static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_info_migration_node *entry;
> +	struct kvm_sev_local_send params;
> +	u64 token;
> +	int ret = -EFAULT;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	if (sev->es_active)
> +		return -EPERM;
> +
> +	if (sev->info_token != 0)
> +		return -EEXIST;
> +
> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +			   sizeof(params)))
> +		return -EFAULT;
> +
> +	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> +	if (!entry)
> +		return -ENOMEM;
> +
> +	entry->asid = sev->asid;
> +	entry->handle = sev->handle;
> +	entry->pages_locked = sev->pages_locked;
> +	entry->misc_cg = sev->misc_cg;
> +
> +	INIT_LIST_HEAD(&entry->regions_list);
> +	list_replace_init(&sev->regions_list, &entry->regions_list);
> +
> +	if (place_migration_node(entry))
> +		goto e_listdel;
> +
> +	token = entry->token;
> +
> +	params.info_token = token;
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
> +			 sizeof(params)))
> +		goto e_hashdel;
> +
> +	sev->info_token = token;
> +
> +	return 0;
> +
> +e_hashdel:

err_<name> is the more standard label for this sort of thing.

> +	spin_lock(&sev_info_migration_hash_lock);
> +	hash_del(&entry->hnode);
> +	spin_unlock(&sev_info_migration_hash_lock);
> +
> +e_listdel:

listdel is a bit of an odd name, though I can't think of a better one.

> +	list_replace_init(&entry->regions_list, &sev->regions_list);
> +
> +	kfree(entry);
> +
> +	return ret;
> +}
> +
> +static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_info_migration_node *entry;
> +	struct kvm_sev_local_receive params;
> +	struct kvm_sev_info old_info;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	if (sev->es_active)
> +		return -EPERM;
> +
> +	if (sev->handle != 0)
> +		return -EPERM;
> +
> +	if (!list_empty(&sev->regions_list))
> +		return -EPERM;
> +
> +	if (copy_from_user(&params,
> +			   (void __user *)(uintptr_t)argp->data,

If you capture argp in a local var, this ugly cast can be done once and you'll
save lines overall, e.g.

	void __user *udata = (void __user *)(uintptr_t)argp->data;

	if (copy_from_user(&params, udata, sizeof(params)))
		return -EFAULT;

	...

	if (copy_to_user(udata, &params, sizeof(params)))
		return -EFAULT;


> +			   sizeof(params)))
> +		return -EFAULT;
> +
> +	spin_lock(&sev_info_migration_hash_lock);
> +	entry = find_migration_info(params.info_token);
> +	if (!entry || !entry->valid)
> +		goto err_unlock;
> +
> +	memcpy(&old_info, sev, sizeof(old_info));
> +
> +	/*
> +	 * The source VM always frees @entry On the target we simply
> +	 * mark the token as invalid to notify the source the sev info
> +	 * has been moved successfully.
> +	 */
> +	entry->valid = false;
> +	sev->active = true;
> +	sev->asid = entry->asid;
> +	sev->handle = entry->handle;
> +	sev->pages_locked = entry->pages_locked;
> +	sev->misc_cg = entry->misc_cg;
> +
> +	INIT_LIST_HEAD(&sev->regions_list);
> +	list_replace_init(&entry->regions_list, &sev->regions_list);
> +
> +	spin_unlock(&sev_info_migration_hash_lock);
> +
> +	params.handle = sev->handle;
> +
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
> +			 sizeof(params)))
> +		goto err_unwind;
> +
> +	sev_asid_free(&old_info);
> +	return 0;
> +
> +err_unwind:
> +	spin_lock(&sev_info_migration_hash_lock);

Why does the lock need to be reacquired, and can anything go sideways if something
else grabbed the lock while it was dropped?

> +	list_replace_init(&sev->regions_list, &entry->regions_list);
> +	entry->valid = true;
> +	memcpy(sev, &old_info, sizeof(*sev));
> +
> +err_unlock:
> +	spin_unlock(&sev_info_migration_hash_lock);
> +
> +	return -EFAULT;
> +}
> +

...

> @@ -1553,6 +1763,12 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  	case KVM_SEV_GET_ATTESTATION_REPORT:
>  		r = sev_get_attestation_report(kvm, &sev_cmd);
>  		break;
> +	case KVM_SEV_LOCAL_SEND:



> +		r = sev_local_send(kvm, &sev_cmd);
> +		break;
> +	case KVM_SEV_LOCAL_RECEIVE:
> +		r = sev_local_receive(kvm, &sev_cmd);
> +		break;
>  	case KVM_SEV_SEND_START:
>  		r = sev_send_start(kvm, &sev_cmd);
>  		break;
> @@ -1786,6 +2002,8 @@ static void __unregister_region_list_locked(struct kvm *kvm,
>  void sev_vm_destroy(struct kvm *kvm)
>  {
>  	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_info_migration_node *entry = NULL;
> +	bool info_migrated = false;
>  
>  	if (!sev_guest(kvm))
>  		return;
> @@ -1796,25 +2014,59 @@ void sev_vm_destroy(struct kvm *kvm)
>  		return;
>  	}
>  
> +	/*
> +	 * If userspace has requested that we migrate the SEV info to a new VM,
> +	 * then we own and must remove an entry node in the tracking data
> +	 * structure. Whether we clean up the data in our SEV info struct and
> +	 * entry node depends on whether userspace has done the migration,
> +	 * which transfers ownership to a new VM. We can identify that
> +	 * migration has occurred by checking if the node is marked invalid.
> +	 */
> +	if (sev->info_token != 0) {
> +		spin_lock(&sev_info_migration_hash_lock);
> +		entry = find_migration_info(sev->info_token);
> +		if (entry) {
> +			info_migrated = !entry->valid;
> +			hash_del(&entry->hnode);

Isn't info_migrated unnecessary?  Grabbing ->valid under the lock is a bit
misleading because it's unnecessary, e.g. once the entry is deleted from the
list then this flow owns it.

If you do s/entry/migration_entry (or mig_entry), then I think the code will be
sufficiently self-documenting.

> +		} else

Needs curly braces.

> +			WARN(1,
> +			     "SEV VM was marked for export, but does not have associated export node.\n");

But an even better way to write this (IMO the msg isn't necessary, the issue is
quite obvious at a quick glance):

		if (!WARN_ON(!entry))
			hash_del(&entry->hnode);

> +		spin_unlock(&sev_info_migration_hash_lock);
> +	}
> +
>  	mutex_lock(&kvm->lock);
>  
>  	/*
> -	 * Ensure that all guest tagged cache entries are flushed before
> -	 * releasing the pages back to the system for use. CLFLUSH will
> -	 * not do this, so issue a WBINVD.
> +	 * Adding memory regions after a local send has started
> +	 * is dangerous.
>  	 */
> -	wbinvd_on_all_cpus();
> +	if (sev->info_token != 0 && !list_empty(&sev->regions_list)) {

Kernel style usually omits the "!= 0".

> +		WARN(1,

Similarly, WARN(1, ...) in an if statement is usually a sign that you're doing
things backwards:

	if (WARN_ON(sev->info_token && !list_empty(&sev->regions_list)))
		unregister_enc_regions(kvm, &sev->regions_list);

In addition to saving code, the WARN will display the failing condition, which
obviates the need for a free form message in most cases (including this one).

Oh, and I think you've got a bug here.  If info_token is '0', won't regions_list
be leaked?  I.e. shouldn't this be (the helper gracefully handles the empty case):

	WARN_ON(sev->info_token && !list_empty(&sev->regions_list));
	unregister_enc_regions(kvm, &sev->regions_list);

That will generate a smaller diff, since the exiting call for regions_list will
be unchanged.

> +		     "Source SEV regions list non-empty after export request. List is not expected to be modified after export request.\n");
> +		__unregister_region_list_locked(kvm, &sev->regions_list);
> +	}
>  
>  	/*
> -	 * if userspace was terminated before unregistering the memory regions
> -	 * then lets unpin all the registered memory.
> +	 * If userspace was terminated before unregistering the memory

Unnecessary newline.  That said, this comment also appears to be stale?

> +	 * regions then lets unpin all the registered memory.
>  	 */
> -	__unregister_region_list_locked(kvm, &sev->regions_list);
> +	if (entry)
> +		__unregister_region_list_locked(kvm, &entry->regions_list);
>  
>  	mutex_unlock(&kvm->lock);
>  
> -	sev_unbind_asid(kvm, sev->handle);
> -	sev_asid_free(sev);
> +	/*
> +	 * Ensure that all guest tagged cache entries are flushed before
> +	 * releasing the pages back to the system for use. CLFLUSH will
> +	 * not do this, so issue a WBINVD.
> +	 */
> +	wbinvd_on_all_cpus();
> +	if (!info_migrated) {

As above, this can be:

	if (!migration_entry || !migration_entry->valid) {

> +		sev_unbind_asid(kvm, sev->handle);
> +		sev_asid_free(sev);
> +	}
> +
> +	kfree(entry);
>  }


* Re: [PATCH 3/3] KVM, SEV: Add support for SEV-ES local migration
  2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
  2021-07-13 18:41   ` Brijesh Singh
@ 2021-07-13 22:21   ` Sean Christopherson
  2021-07-13 23:25   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2021-07-13 22:21 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, Paolo Bonzini, David Rientjes, Dr . David Alan Gilbert,
	Brijesh Singh, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, linux-kernel

On Mon, Jun 21, 2021, Peter Gonda wrote:
> +static int process_vmsa_list(struct kvm *kvm, struct list_head *vmsa_list)
> +{
> +	struct vmsa_node *vmsa_node, *q;
> +	struct kvm_vcpu *vcpu;
> +	struct vcpu_svm *svm;
> +
> +	lockdep_assert_held(&kvm->lock);
> +
> +	if (!vmsa_list)

This is pointless, all callers pass in a list, i.e. it's mandatory.

> +		return 0;
> +
> +	list_for_each_entry(vmsa_node, vmsa_list, list) {
> +		if (!kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id)) {
> +			WARN(1,
> +			     "Failed to find VCPU with ID %d despite presence in VMSA list.\n",
> +			     vmsa_node->vcpu_id);
> +			return -1;
> +		}
> +	}
> +
> +	/*
> +	 * Move any stashed VMSAs back to their respective VMCBs and delete
> +	 * those nodes.
> +	 */
> +	list_for_each_entry_safe(vmsa_node, q, vmsa_list, list) {
> +		vcpu = kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id);

Barring a KVM bug, is it even theoretically possible for vcpu to be NULL?  If not,
I'd simply drop the above sanity check.  If this can only be true if there's a
KVM bug and you really want to keep the WARN, just do:

		if (WARN_ON(!vcpu))
			continue;

since a KVM bug this egregious means all bets are off anyways.  That should also
allow you to make this a void returning helper and avoid pointless checking.
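
E.g., a rough sketch of the void variant, reusing the assignments from the
patch as-is (the up-front validation loop and the error return both go away):

	static void process_vmsa_list(struct kvm *kvm, struct list_head *vmsa_list)
	{
		struct vmsa_node *vmsa_node, *q;
		struct kvm_vcpu *vcpu;
		struct vcpu_svm *svm;

		lockdep_assert_held(&kvm->lock);

		/* Move stashed VMSAs back to their vCPUs and free the nodes. */
		list_for_each_entry_safe(vmsa_node, q, vmsa_list, list) {
			vcpu = kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id);
			if (WARN_ON(!vcpu))
				continue;

			svm = to_svm(vcpu);
			svm->vmsa = vmsa_node->vmsa;
			svm->ghcb = vmsa_node->ghcb;
			svm->vmcb->control.ghcb_gpa = vmsa_node->ghcb_gpa;
			svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
			svm->vcpu.arch.guest_state_protected = true;
			svm->ghcb_sa = vmsa_node->ghcb_sa;
			svm->ghcb_sa_len = vmsa_node->ghcb_sa_len;
			svm->ghcb_sa_sync = vmsa_node->ghcb_sa_sync;
			svm->ghcb_sa_free = vmsa_node->ghcb_sa_free;

			list_del(&vmsa_node->list);
			kfree(vmsa_node);
		}
	}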

> +		svm = to_svm(vcpu);
> +		svm->vmsa = vmsa_node->vmsa;
> +		svm->ghcb = vmsa_node->ghcb;
> +		svm->vmcb->control.ghcb_gpa = vmsa_node->ghcb_gpa;
> +		svm->vcpu.arch.guest_state_protected = true;
> +		svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
> +		svm->ghcb_sa = vmsa_node->ghcb_sa;
> +		svm->ghcb_sa_len = vmsa_node->ghcb_sa_len;
> +		svm->ghcb_sa_sync = vmsa_node->ghcb_sa_sync;
> +		svm->ghcb_sa_free = vmsa_node->ghcb_sa_free;
> +
> +		list_del(&vmsa_node->list);
> +		kfree(vmsa_node);
> +	}
> +
> +	return 0;
> +}
> +
> +static int create_vmsa_list(struct kvm *kvm,
> +			    struct sev_info_migration_node *entry)
> +{
> +	int i;
> +	const int num_vcpus = atomic_read(&kvm->online_vcpus);
> +	struct vmsa_node *node;
> +	struct kvm_vcpu *vcpu;
> +	struct vcpu_svm *svm;
> +
> +	INIT_LIST_HEAD(&entry->vmsa_list);
> +	for (i = 0; i < num_vcpus; ++i) {
> +		node = kzalloc(sizeof(*node), GFP_KERNEL);
> +		if (!node)
> +			goto e_freelist;
> +
> +		vcpu = kvm->vcpus[i];
> +		node->vcpu_id = vcpu->vcpu_id;
> +
> +		svm = to_svm(vcpu);
> +		node->vmsa = svm->vmsa;
> +		svm->vmsa = NULL;
> +		node->ghcb = svm->ghcb;
> +		svm->ghcb = NULL;
> +		node->ghcb_gpa = svm->vmcb->control.ghcb_gpa;
> +		node->ghcb_sa = svm->ghcb_sa;
> +		svm->ghcb_sa = NULL;
> +		node->ghcb_sa_len = svm->ghcb_sa_len;
> +		svm->ghcb_sa_len = 0;
> +		node->ghcb_sa_sync = svm->ghcb_sa_sync;
> +		svm->ghcb_sa_sync = false;
> +		node->ghcb_sa_free = svm->ghcb_sa_free;
> +		svm->ghcb_sa_free = false;
> +
> +		list_add_tail(&node->list, &entry->vmsa_list);
> +	}
> +
> +	return 0;
> +
> +e_freelist:
> +	if (process_vmsa_list(kvm, &entry->vmsa_list))
> +		WARN(1, "Unable to move VMSA list back to source VM. Guest is in a broken state now.");

Same comments about err_freelist and using WARN_ON().  Though if process_vmsa_list()
can't return an error, this goes away entirely.
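
I.e. with a void process_vmsa_list() as sketched above, the error path could
collapse to just:

	e_freelist:
		process_vmsa_list(kvm, &entry->vmsa_list);
		return -1;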

> +	return -1;
> +}
> +
>  static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  {
>  	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> @@ -1174,9 +1280,6 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	if (!sev_guest(kvm))
>  		return -ENOTTY;
>  
> -	if (sev->es_active)
> -		return -EPERM;
> -
>  	if (sev->info_token != 0)
>  		return -EEXIST;
>  
> @@ -1196,8 +1299,19 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	INIT_LIST_HEAD(&entry->regions_list);
>  	list_replace_init(&sev->regions_list, &entry->regions_list);
>  
> +	if (sev_es_guest(kvm)) {
> +		/*
> +		 * If this is an ES guest, we need to move each VMCB's VMSA into a
> +		 * list for migration.
> +		 */
> +		entry->es_enabled = true;
> +		entry->ap_jump_table = sev->ap_jump_table;
> +		if (create_vmsa_list(kvm, entry))
> +			goto e_listdel;
> +	}
> +
>  	if (place_migration_node(entry))
> -		goto e_listdel;
> +		goto e_vmsadel;
>  
>  	token = entry->token;
>  
> @@ -1215,6 +1329,11 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	hash_del(&entry->hnode);
>  	spin_unlock(&sev_info_migration_hash_lock);
>  
> +e_vmsadel:
> +	if (sev_es_guest(kvm) && process_vmsa_list(kvm, &entry->vmsa_list))
> +		WARN(1,
> +		     "Unable to move VMSA list back to source VM. Guest is in a broken state now.");

Guess what today's Final Jeopardy answer is? :-D

> +
>  e_listdel:
>  	list_replace_init(&entry->regions_list, &sev->regions_list);
>  
> @@ -1233,9 +1352,6 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	if (!sev_guest(kvm))
>  		return -ENOTTY;
>  
> -	if (sev->es_active)
> -		return -EPERM;
> -
>  	if (sev->handle != 0)
>  		return -EPERM;
>  
> @@ -1254,6 +1370,14 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  
>  	memcpy(&old_info, sev, sizeof(old_info));
>  
> +	if (entry->es_enabled) {
> +		if (process_vmsa_list(kvm, &entry->vmsa_list))
> +			goto err_unlock;
> +
> +		sev->es_active = true;
> +		sev->ap_jump_table = entry->ap_jump_table;
> +	}
> +
>  	/*
>  	 * The source VM always frees @entry On the target we simply
>  	 * mark the token as invalid to notify the source the sev info
> @@ -2046,12 +2170,22 @@ void sev_vm_destroy(struct kvm *kvm)
>  		__unregister_region_list_locked(kvm, &sev->regions_list);
>  	}
>  
> -	/*
> -	 * If userspace was terminated before unregistering the memory
> -	 * regions then lets unpin all the registered memory.
> -	 */
> -	if (entry)
> +	if (entry) {
> +		/*
> +		 * If there are any saved VMSAs, restore them so they can be
> +		 * destructed through the normal path.
> +		 */
> +		if (entry->es_enabled)
> +			if (process_vmsa_list(kvm, &entry->vmsa_list))
> +				WARN(1,
> +				     "Unable to clean up vmsa_list");

More code that can be zapped if process_vmsa_list() is less of a zealot.
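
E.g., roughly (sketch only, keeping the existing helper names):

	if (entry) {
		if (entry->es_enabled)
			process_vmsa_list(kvm, &entry->vmsa_list);

		/*
		 * If userspace was terminated before unregistering the memory
		 * regions then lets unpin all the registered memory.
		 */
		__unregister_region_list_locked(kvm, &entry->regions_list);
	}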

> +
> +		/*
> +		 * If userspace was terminated before unregistering the memory
> +		 * regions then lets unpin all the registered memory.
> +		 */
>  		__unregister_region_list_locked(kvm, &entry->regions_list);
> +	}
>  
>  	mutex_unlock(&kvm->lock);
>  
> @@ -2243,9 +2377,11 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
>  
>  	svm = to_svm(vcpu);
>  
> -	if (vcpu->arch.guest_state_protected)
> +	if (svm->ghcb && vcpu->arch.guest_state_protected)
>  		sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
> -	__free_page(virt_to_page(svm->vmsa));
> +
> +	if (svm->vmsa)
> +		__free_page(virt_to_page(svm->vmsa));
>  
>  	if (svm->ghcb_sa_free)
>  		kfree(svm->ghcb_sa);
> -- 
> 2.32.0.288.g62a8d224e6-goog
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions
  2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
  2021-07-12 20:05   ` Brijesh Singh
  2021-07-13 21:40   ` Sean Christopherson
@ 2021-07-13 23:18   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Marc Orr @ 2021-07-13 23:18 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm list, Paolo Bonzini, Sean Christopherson, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

On Mon, Jun 21, 2021 at 9:59 AM Peter Gonda <pgonda@google.com> wrote:
>
> Factor out helper function for freeing the encrypted region list.
>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
>
> ---
>  arch/x86/kvm/svm/sev.c | 26 +++++++++++++++++---------
>  1 file changed, 17 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 46e339c84998..5af46ff6ec48 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1767,11 +1767,25 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
>         return ret;
>  }
>
> +static void __unregister_region_list_locked(struct kvm *kvm,
> +                                           struct list_head *mem_regions)
> +{
> +       struct enc_region *pos, *q;
> +
> +       lockdep_assert_held(&kvm->lock);
> +
> +       if (list_empty(mem_regions))
> +               return;
> +
> +       list_for_each_entry_safe(pos, q, mem_regions, list) {
> +               __unregister_enc_region_locked(kvm, pos);
> +               cond_resched();
> +       }
> +}
> +
>  void sev_vm_destroy(struct kvm *kvm)
>  {
>         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> -       struct list_head *head = &sev->regions_list;
> -       struct list_head *pos, *q;
>
>         if (!sev_guest(kvm))
>                 return;
> @@ -1795,13 +1809,7 @@ void sev_vm_destroy(struct kvm *kvm)
>          * if userspace was terminated before unregistering the memory regions
>          * then lets unpin all the registered memory.
>          */
> -       if (!list_empty(head)) {
> -               list_for_each_safe(pos, q, head) {
> -                       __unregister_enc_region_locked(kvm,
> -                               list_entry(pos, struct enc_region, list));
> -                       cond_resched();
> -               }
> -       }
> +       __unregister_region_list_locked(kvm, &sev->regions_list);
>
>         mutex_unlock(&kvm->lock);
>
> --
> 2.32.0.288.g62a8d224e6-goog

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/3] KVM, SEV: Add support for SEV local migration
  2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
  2021-07-12 21:09   ` Brijesh Singh
  2021-07-13 22:12   ` Sean Christopherson
@ 2021-07-13 23:24   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Marc Orr @ 2021-07-13 23:24 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm list, Lars Bull, Paolo Bonzini, Sean Christopherson,
	David Rientjes, Dr . David Alan Gilbert, Brijesh Singh,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	linux-kernel

On Mon, Jun 21, 2021 at 9:59 AM Peter Gonda <pgonda@google.com> wrote:
>
> Local migration provides a low-cost mechanism for userspace VMM upgrades.
> It is an alternative to traditional (i.e., remote) live migration. Whereas
> remote migration handles moving a guest to a new host, local migration only
> handles moving a guest to a new userspace VMM within a host.
>
> For SEV to work with local migration, the contents of the SEV info struct,
> such as the ASID (used to index the encryption key in the AMD SP) and the
> list of memory regions, need to be transferred to the target VM. Add
> commands for sending and receiving the SEV info.
>
> To avoid exposing this internal state to userspace and prevent other
> processes from importing state they shouldn't have access to, the send
> returns a token to userspace that is handed off to the target VM. The
> target passes in this token to receive the sent state. The token is only
> valid for one-time use. Functionality on the source becomes limited after
> send has been performed. If the source is destroyed before the target has
> received, the token becomes invalid.
>
> The target is expected to be initialized (sev_guest_init), but not yet in
> a launched state (sev_launch_start), when performing receive. Once the
> target has received, it will be in a launched state and will not need to
> perform the typical SEV launch commands.
>
> Co-developed-by: Lars Bull <larsbull@google.com>
> Signed-off-by: Lars Bull <larsbull@google.com>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
>
> ---
>  .../virt/kvm/amd-memory-encryption.rst        |  43 +++
>  arch/x86/kvm/svm/sev.c                        | 270 +++++++++++++++++-
>  arch/x86/kvm/svm/svm.h                        |   1 +
>  include/uapi/linux/kvm.h                      |  12 +
>  4 files changed, 317 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index 5ec8a1902e15..0f9030e3dcfe 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -427,6 +427,49 @@ issued by the hypervisor to make the guest ready for execution.
>
>  Returns: 0 on success, -negative on error
>
> +10. KVM_SEV_LOCAL_SEND
> +----------------------------------
> +
> +The KVM_SEV_LOCAL_SEND command is used to stage the VM's SEV info
> +for the purposes of migrating memory to a new local VM while using the same SEV
> +key. If the source VM is destroyed before the staged info has been received by
> +the target, the info is lost. Once the info has been staged, only the
> +KVM_SEV_DBG_DECRYPT and KVM_SEV_DBG_ENCRYPT commands can be used by the
> +source.
> +
> +Parameters (out): struct kvm_sev_local_send
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +
> +    struct kvm_sev_local_send {
> +        __u64 info_token;    /* token referencing the staged info */
> +    };
> +
> +11. KVM_SEV_LOCAL_RECEIVE
> +-------------------------------------
> +
> +The KVM_SEV_LOCAL_RECEIVE command is used to transfer staged SEV
> +info to a target VM from some source VM. SEV on the target VM should be active
> +when receive is performed, but not yet launched and without any pinned memory.
> +The launch commands should be skipped after receive because they should have
> +already been performed on the source.
> +
> +Parameters (in/out): struct kvm_sev_local_receive
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +
> +    struct kvm_sev_local_receive {
> +        __u64 info_token;    /* token referencing the staged info */
> +        __u32 handle;        /* guest handle */
> +    };
> +
> +On success, the 'handle' field contains the handle for this SEV guest.
> +
> +
>  References
>  ==========
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 5af46ff6ec48..7c33ad2b910d 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -14,6 +14,7 @@
>  #include <linux/psp-sev.h>
>  #include <linux/pagemap.h>
>  #include <linux/swap.h>
> +#include <linux/random.h>
>  #include <linux/misc_cgroup.h>
>  #include <linux/processor.h>
>  #include <linux/trace_events.h>
> @@ -57,6 +58,8 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
>  #define sev_es_enabled false
>  #endif /* CONFIG_KVM_AMD_SEV */
>
> +#define MAX_RAND_RETRY    3
> +
>  static u8 sev_enc_bit;
>  static DECLARE_RWSEM(sev_deactivate_lock);
>  static DEFINE_MUTEX(sev_bitmap_lock);
> @@ -74,6 +77,22 @@ struct enc_region {
>         unsigned long size;
>  };
>
> +struct sev_info_migration_node {
> +       struct hlist_node hnode;
> +       u64 token;
> +       bool valid;
> +
> +       unsigned int asid;
> +       unsigned int handle;
> +       unsigned long pages_locked;
> +       struct list_head regions_list;
> +       struct misc_cg *misc_cg;
> +};
> +
> +#define SEV_INFO_MIGRATION_HASH_BITS    7
> +static DEFINE_HASHTABLE(sev_info_migration_hash, SEV_INFO_MIGRATION_HASH_BITS);
> +static DEFINE_SPINLOCK(sev_info_migration_hash_lock);
> +
>  /* Called with the sev_bitmap_lock held, or on shutdown  */
>  static int sev_flush_asids(int min_asid, int max_asid)
>  {
> @@ -1094,6 +1113,185 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         return ret;
>  }
>
> +static struct sev_info_migration_node *find_migration_info(unsigned long token)
> +{
> +       struct sev_info_migration_node *entry;
> +
> +       hash_for_each_possible(sev_info_migration_hash, entry, hnode, token) {
> +               if (entry->token == token)
> +                       return entry;
> +       }
> +
> +       return NULL;
> +}
> +
> +/*
> + * Places @entry into the |sev_info_migration_hash|. Returns 0 if successful
> + * and ownership of @entry is transferred to the hashmap.
> + */
> +static int place_migration_node(struct sev_info_migration_node *entry)
> +{
> +       u64 token = 0;
> +       unsigned int retries;
> +       int ret = -EFAULT;
> +
> +       /*
> +        * Generate a token associated with this VM's SEV info that userspace
> +        * can use to import on the other side. We use 0 to indicate a not-
> +        * present token. The token cannot collide with other existing ones, so
> +        * reroll a few times until we get a valid token. In the unlikely event
> +        * we're having trouble generating a unique token, give up and let
> +        * userspace retry if it needs to.
> +        */
> +       spin_lock(&sev_info_migration_hash_lock);
> +       for (retries = 0; retries < MAX_RAND_RETRY; retries++)  {
> +               get_random_bytes((void *)&token, sizeof(token));
> +
> +               if (find_migration_info(token))
> +                       continue;
> +
> +               entry->token = token;
> +               entry->valid = true;
> +
> +               hash_add(sev_info_migration_hash, &entry->hnode, token);
> +               ret = 0;
> +               goto out;
> +       }
> +
> +out:
> +       spin_unlock(&sev_info_migration_hash_lock);
> +       return ret;
> +}
> +
> +static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       struct sev_info_migration_node *entry;
> +       struct kvm_sev_local_send params;
> +       u64 token;
> +       int ret = -EFAULT;
> +
> +       if (!sev_guest(kvm))
> +               return -ENOTTY;
> +
> +       if (sev->es_active)
> +               return -EPERM;
> +
> +       if (sev->info_token != 0)
> +               return -EEXIST;
> +
> +       if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +                          sizeof(params)))
> +               return -EFAULT;
> +
> +       entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> +       if (!entry)
> +               return -ENOMEM;
> +
> +       entry->asid = sev->asid;
> +       entry->handle = sev->handle;
> +       entry->pages_locked = sev->pages_locked;
> +       entry->misc_cg = sev->misc_cg;
> +
> +       INIT_LIST_HEAD(&entry->regions_list);
> +       list_replace_init(&sev->regions_list, &entry->regions_list);
> +
> +       if (place_migration_node(entry))
> +               goto e_listdel;
> +
> +       token = entry->token;
> +
> +       params.info_token = token;
> +       if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
> +                        sizeof(params)))
> +               goto e_hashdel;
> +
> +       sev->info_token = token;
> +
> +       return 0;
> +
> +e_hashdel:
> +       spin_lock(&sev_info_migration_hash_lock);
> +       hash_del(&entry->hnode);
> +       spin_unlock(&sev_info_migration_hash_lock);
> +
> +e_listdel:
> +       list_replace_init(&entry->regions_list, &sev->regions_list);
> +
> +       kfree(entry);
> +
> +       return ret;
> +}
> +
> +static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       struct sev_info_migration_node *entry;
> +       struct kvm_sev_local_receive params;
> +       struct kvm_sev_info old_info;
> +
> +       if (!sev_guest(kvm))
> +               return -ENOTTY;
> +
> +       if (sev->es_active)
> +               return -EPERM;
> +
> +       if (sev->handle != 0)
> +               return -EPERM;
> +
> +       if (!list_empty(&sev->regions_list))
> +               return -EPERM;
> +
> +       if (copy_from_user(&params,
> +                          (void __user *)(uintptr_t)argp->data,
> +                          sizeof(params)))
> +               return -EFAULT;
> +
> +       spin_lock(&sev_info_migration_hash_lock);
> +       entry = find_migration_info(params.info_token);
> +       if (!entry || !entry->valid)
> +               goto err_unlock;
> +
> +       memcpy(&old_info, sev, sizeof(old_info));
> +
> +       /*
> +        * The source VM always frees @entry On the target we simply
> +        * mark the token as invalid to notify the source the sev info
> +        * has been moved successfully.
> +        */
> +       entry->valid = false;
> +       sev->active = true;
> +       sev->asid = entry->asid;
> +       sev->handle = entry->handle;
> +       sev->pages_locked = entry->pages_locked;
> +       sev->misc_cg = entry->misc_cg;
> +
> +       INIT_LIST_HEAD(&sev->regions_list);
> +       list_replace_init(&entry->regions_list, &sev->regions_list);
> +
> +       spin_unlock(&sev_info_migration_hash_lock);
> +
> +       params.handle = sev->handle;
> +
> +       if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
> +                        sizeof(params)))
> +               goto err_unwind;
> +
> +       sev_asid_free(&old_info);
> +       return 0;
> +
> +err_unwind:
> +       spin_lock(&sev_info_migration_hash_lock);
> +       list_replace_init(&sev->regions_list, &entry->regions_list);
> +       entry->valid = true;
> +       memcpy(sev, &old_info, sizeof(*sev));
> +
> +err_unlock:
> +       spin_unlock(&sev_info_migration_hash_lock);
> +
> +       return -EFAULT;
> +}
> +
>  /* Userspace wants to query session length. */
>  static int
>  __sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
> @@ -1513,6 +1711,18 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>                 goto out;
>         }
>
> +       /*
> +        * If this VM has started exporting its SEV contents to another VM,
> +        * it's not allowed to do any more SEV operations that may modify the
> +        * SEV state.
> +        */
> +       if (to_kvm_svm(kvm)->sev_info.info_token &&
> +           sev_cmd.id != KVM_SEV_DBG_ENCRYPT &&
> +           sev_cmd.id != KVM_SEV_DBG_DECRYPT) {
> +               r = -EPERM;
> +               goto out;
> +       }
> +
>         switch (sev_cmd.id) {
>         case KVM_SEV_ES_INIT:
>                 if (!sev_es_enabled) {
> @@ -1553,6 +1763,12 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>         case KVM_SEV_GET_ATTESTATION_REPORT:
>                 r = sev_get_attestation_report(kvm, &sev_cmd);
>                 break;
> +       case KVM_SEV_LOCAL_SEND:
> +               r = sev_local_send(kvm, &sev_cmd);
> +               break;
> +       case KVM_SEV_LOCAL_RECEIVE:
> +               r = sev_local_receive(kvm, &sev_cmd);
> +               break;
>         case KVM_SEV_SEND_START:
>                 r = sev_send_start(kvm, &sev_cmd);
>                 break;
> @@ -1786,6 +2002,8 @@ static void __unregister_region_list_locked(struct kvm *kvm,
>  void sev_vm_destroy(struct kvm *kvm)
>  {
>         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       struct sev_info_migration_node *entry = NULL;
> +       bool info_migrated = false;
>
>         if (!sev_guest(kvm))
>                 return;
> @@ -1796,25 +2014,59 @@ void sev_vm_destroy(struct kvm *kvm)
>                 return;
>         }
>
> +       /*
> +        * If userspace has requested that we migrate the SEV info to a new VM,
> +        * then we own and must remove an entry node in the tracking data
> +        * structure. Whether we clean up the data in our SEV info struct and
> +        * entry node depends on whether userspace has done the migration,
> +        * which transfers ownership to a new VM. We can identify that
> +        * migration has occurred by checking if the node is marked invalid.
> +        */
> +       if (sev->info_token != 0) {
> +               spin_lock(&sev_info_migration_hash_lock);
> +               entry = find_migration_info(sev->info_token);
> +               if (entry) {
> +                       info_migrated = !entry->valid;
> +                       hash_del(&entry->hnode);
> +               } else
> +                       WARN(1,
> +                            "SEV VM was marked for export, but does not have associated export node.\n");
> +               spin_unlock(&sev_info_migration_hash_lock);
> +       }
> +
>         mutex_lock(&kvm->lock);
>
>         /*
> -        * Ensure that all guest tagged cache entries are flushed before
> -        * releasing the pages back to the system for use. CLFLUSH will
> -        * not do this, so issue a WBINVD.
> +        * Adding memory regions after a local send has started
> +        * is dangerous.
>          */
> -       wbinvd_on_all_cpus();
> +       if (sev->info_token != 0 && !list_empty(&sev->regions_list)) {
> +               WARN(1,
> +                    "Source SEV regions list non-empty after export request. List is not expected to be modified after export request.\n");
> +               __unregister_region_list_locked(kvm, &sev->regions_list);
> +       }
>
>         /*
> -        * if userspace was terminated before unregistering the memory regions
> -        * then lets unpin all the registered memory.
> +        * If userspace was terminated before unregistering the memory
> +        * regions then lets unpin all the registered memory.
>          */
> -       __unregister_region_list_locked(kvm, &sev->regions_list);
> +       if (entry)
> +               __unregister_region_list_locked(kvm, &entry->regions_list);
>
>         mutex_unlock(&kvm->lock);
>
> -       sev_unbind_asid(kvm, sev->handle);
> -       sev_asid_free(sev);
> +       /*
> +        * Ensure that all guest tagged cache entries are flushed before
> +        * releasing the pages back to the system for use. CLFLUSH will
> +        * not do this, so issue a WBINVD.
> +        */
> +       wbinvd_on_all_cpus();
> +       if (!info_migrated) {
> +               sev_unbind_asid(kvm, sev->handle);
> +               sev_asid_free(sev);
> +       }
> +
> +       kfree(entry);
>  }
>
>  void __init sev_set_cpu_caps(void)
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 70419e417c0d..1ae8fe623c70 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -65,6 +65,7 @@ struct kvm_sev_info {
>         u64 ap_jump_table;      /* SEV-ES AP Jump Table address */
>         struct kvm *enc_context_owner; /* Owner of copied encryption context */
>         struct misc_cg *misc_cg; /* For misc cgroup accounting */
> +       u64 info_token; /* Token for SEV info local migration */
>  };
>
>  struct kvm_svm {
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 79d9c44d1ad7..b317d4b2507d 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1678,6 +1678,9 @@ enum sev_cmd_id {
>         KVM_SEV_GET_ATTESTATION_REPORT,
>         /* Guest Migration Extension */
>         KVM_SEV_SEND_CANCEL,
> +       /* Local migration commands */
> +       KVM_SEV_LOCAL_SEND,
> +       KVM_SEV_LOCAL_RECEIVE,
>
>         KVM_SEV_NR_MAX,
>  };
> @@ -1775,6 +1778,15 @@ struct kvm_sev_receive_update_data {
>         __u32 trans_len;
>  };
>
> +struct kvm_sev_local_send {
> +       __u64 info_token;
> +};
> +
> +struct kvm_sev_local_receive {
> +       __u64 info_token;
> +       __u32 handle;
> +};
> +
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU    (1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3         (1 << 1)
>  #define KVM_DEV_ASSIGN_MASK_INTX       (1 << 2)
> --
> 2.32.0.288.g62a8d224e6-goog

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/3] KVM, SEV: Add support for SEV-ES local migration
  2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
  2021-07-13 18:41   ` Brijesh Singh
  2021-07-13 22:21   ` Sean Christopherson
@ 2021-07-13 23:25   ` Marc Orr
  2 siblings, 0 replies; 14+ messages in thread
From: Marc Orr @ 2021-07-13 23:25 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm list, Paolo Bonzini, Sean Christopherson, David Rientjes,
	Dr . David Alan Gilbert, Brijesh Singh, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-kernel

On Mon, Jun 21, 2021 at 9:59 AM Peter Gonda <pgonda@google.com> wrote:
>
> Local migration provides a low-cost mechanism for userspace VMM upgrades.
> It is an alternative to traditional (i.e., remote) live migration. Whereas
> remote migration handles moving a guest to a new host, local migration only
> handles moving a guest to a new userspace VMM within a host.
>
> For SEV-ES to work with local migration, the VMSAs, GHCB metadata,
> and other SEV-ES info need to be preserved along with the guest's
> memory. KVM maintains a pointer to each vCPU's GHCB and may additionally
> keep a copy of the GHCB's save area if the guest has been using it
> for NAE handling. The local send and receive ioctls have been updated to
> move this additional per-vCPU SEV-ES metadata into the hashmap used for
> SEV local migration data.
>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
>
> ---
>  arch/x86/kvm/svm/sev.c | 164 +++++++++++++++++++++++++++++++++++++----
>  1 file changed, 150 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 7c33ad2b910d..33df7ed08d21 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -77,6 +77,19 @@ struct enc_region {
>         unsigned long size;
>  };
>
> +struct vmsa_node {
> +       struct list_head list;
> +       int vcpu_id;
> +       struct vmcb_save_area *vmsa;
> +       struct ghcb *ghcb;
> +       u64 ghcb_gpa;
> +
> +       void *ghcb_sa;
> +       u64 ghcb_sa_len;
> +       bool ghcb_sa_sync;
> +       bool ghcb_sa_free;
> +};
> +
>  struct sev_info_migration_node {
>         struct hlist_node hnode;
>         u64 token;
> @@ -87,6 +100,11 @@ struct sev_info_migration_node {
>         unsigned long pages_locked;
>         struct list_head regions_list;
>         struct misc_cg *misc_cg;
> +
> +       /* The following fields are for SEV-ES guests */
> +       bool es_enabled;
> +       struct list_head vmsa_list;
> +       u64 ap_jump_table;
>  };
>
>  #define SEV_INFO_MIGRATION_HASH_BITS    7
> @@ -1163,6 +1181,94 @@ static int place_migration_node(struct sev_info_migration_node *entry)
>         return ret;
>  }
>
> +static int process_vmsa_list(struct kvm *kvm, struct list_head *vmsa_list)
> +{
> +       struct vmsa_node *vmsa_node, *q;
> +       struct kvm_vcpu *vcpu;
> +       struct vcpu_svm *svm;
> +
> +       lockdep_assert_held(&kvm->lock);
> +
> +       if (!vmsa_list)
> +               return 0;
> +
> +       list_for_each_entry(vmsa_node, vmsa_list, list) {
> +               if (!kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id)) {
> +                       WARN(1,
> +                            "Failed to find VCPU with ID %d despite presence in VMSA list.\n",
> +                            vmsa_node->vcpu_id);
> +                       return -1;
> +               }
> +       }
> +
> +       /*
> +        * Move any stashed VMSAs back to their respective VMCBs and delete
> +        * those nodes.
> +        */
> +       list_for_each_entry_safe(vmsa_node, q, vmsa_list, list) {
> +               vcpu = kvm_get_vcpu_by_id(kvm, vmsa_node->vcpu_id);
> +               svm = to_svm(vcpu);
> +               svm->vmsa = vmsa_node->vmsa;
> +               svm->ghcb = vmsa_node->ghcb;
> +               svm->vmcb->control.ghcb_gpa = vmsa_node->ghcb_gpa;
> +               svm->vcpu.arch.guest_state_protected = true;
> +               svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
> +               svm->ghcb_sa = vmsa_node->ghcb_sa;
> +               svm->ghcb_sa_len = vmsa_node->ghcb_sa_len;
> +               svm->ghcb_sa_sync = vmsa_node->ghcb_sa_sync;
> +               svm->ghcb_sa_free = vmsa_node->ghcb_sa_free;
> +
> +               list_del(&vmsa_node->list);
> +               kfree(vmsa_node);
> +       }
> +
> +       return 0;
> +}
> +
> +static int create_vmsa_list(struct kvm *kvm,
> +                           struct sev_info_migration_node *entry)
> +{
> +       int i;
> +       const int num_vcpus = atomic_read(&kvm->online_vcpus);
> +       struct vmsa_node *node;
> +       struct kvm_vcpu *vcpu;
> +       struct vcpu_svm *svm;
> +
> +       INIT_LIST_HEAD(&entry->vmsa_list);
> +       for (i = 0; i < num_vcpus; ++i) {
> +               node = kzalloc(sizeof(*node), GFP_KERNEL);
> +               if (!node)
> +                       goto e_freelist;
> +
> +               vcpu = kvm->vcpus[i];
> +               node->vcpu_id = vcpu->vcpu_id;
> +
> +               svm = to_svm(vcpu);
> +               node->vmsa = svm->vmsa;
> +               svm->vmsa = NULL;
> +               node->ghcb = svm->ghcb;
> +               svm->ghcb = NULL;
> +               node->ghcb_gpa = svm->vmcb->control.ghcb_gpa;
> +               node->ghcb_sa = svm->ghcb_sa;
> +               svm->ghcb_sa = NULL;
> +               node->ghcb_sa_len = svm->ghcb_sa_len;
> +               svm->ghcb_sa_len = 0;
> +               node->ghcb_sa_sync = svm->ghcb_sa_sync;
> +               svm->ghcb_sa_sync = false;
> +               node->ghcb_sa_free = svm->ghcb_sa_free;
> +               svm->ghcb_sa_free = false;
> +
> +               list_add_tail(&node->list, &entry->vmsa_list);
> +       }
> +
> +       return 0;
> +
> +e_freelist:
> +       if (process_vmsa_list(kvm, &entry->vmsa_list))
> +               WARN(1, "Unable to move VMSA list back to source VM. Guest is in a broken state now.");
> +       return -1;
> +}
> +
>  static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  {
>         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> @@ -1174,9 +1280,6 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         if (!sev_guest(kvm))
>                 return -ENOTTY;
>
> -       if (sev->es_active)
> -               return -EPERM;
> -
>         if (sev->info_token != 0)
>                 return -EEXIST;
>
> @@ -1196,8 +1299,19 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         INIT_LIST_HEAD(&entry->regions_list);
>         list_replace_init(&sev->regions_list, &entry->regions_list);
>
> +       if (sev_es_guest(kvm)) {
> +               /*
> +                * If this is an ES guest, we need to move each VMCB's VMSA into a
> +                * list for migration.
> +                */
> +               entry->es_enabled = true;
> +               entry->ap_jump_table = sev->ap_jump_table;
> +               if (create_vmsa_list(kvm, entry))
> +                       goto e_listdel;
> +       }
> +
>         if (place_migration_node(entry))
> -               goto e_listdel;
> +               goto e_vmsadel;
>
>         token = entry->token;
>
> @@ -1215,6 +1329,11 @@ static int sev_local_send(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         hash_del(&entry->hnode);
>         spin_unlock(&sev_info_migration_hash_lock);
>
> +e_vmsadel:
> +       if (sev_es_guest(kvm) && process_vmsa_list(kvm, &entry->vmsa_list))
> +               WARN(1,
> +                    "Unable to move VMSA list back to source VM. Guest is in a broken state now.");
> +
>  e_listdel:
>         list_replace_init(&entry->regions_list, &sev->regions_list);
>
> @@ -1233,9 +1352,6 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         if (!sev_guest(kvm))
>                 return -ENOTTY;
>
> -       if (sev->es_active)
> -               return -EPERM;
> -
>         if (sev->handle != 0)
>                 return -EPERM;
>
> @@ -1254,6 +1370,14 @@ static int sev_local_receive(struct kvm *kvm, struct kvm_sev_cmd *argp)
>
>         memcpy(&old_info, sev, sizeof(old_info));
>
> +       if (entry->es_enabled) {
> +               if (process_vmsa_list(kvm, &entry->vmsa_list))
> +                       goto err_unlock;
> +
> +               sev->es_active = true;
> +               sev->ap_jump_table = entry->ap_jump_table;
> +       }
> +
>         /*
>          * The source VM always frees @entry On the target we simply
>          * mark the token as invalid to notify the source the sev info
> @@ -2046,12 +2170,22 @@ void sev_vm_destroy(struct kvm *kvm)
>                 __unregister_region_list_locked(kvm, &sev->regions_list);
>         }
>
> -       /*
> -        * If userspace was terminated before unregistering the memory
> -        * regions then lets unpin all the registered memory.
> -        */
> -       if (entry)
> +       if (entry) {
> +               /*
> +                * If there are any saved VMSAs, restore them so they can be
> +                * destructed through the normal path.
> +                */
> +               if (entry->es_enabled)
> +                       if (process_vmsa_list(kvm, &entry->vmsa_list))
> +                               WARN(1,
> +                                    "Unable to clean up vmsa_list");
> +
> +               /*
> +                * If userspace was terminated before unregistering the memory
> +                * regions then lets unpin all the registered memory.
> +                */
>                 __unregister_region_list_locked(kvm, &entry->regions_list);
> +       }
>
>         mutex_unlock(&kvm->lock);
>
> @@ -2243,9 +2377,11 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
>
>         svm = to_svm(vcpu);
>
> -       if (vcpu->arch.guest_state_protected)
> +       if (svm->ghcb && vcpu->arch.guest_state_protected)
>                 sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
> -       __free_page(virt_to_page(svm->vmsa));
> +
> +       if (svm->vmsa)
> +               __free_page(virt_to_page(svm->vmsa));
>
>         if (svm->ghcb_sa_free)
>                 kfree(svm->ghcb_sa);
> --
> 2.32.0.288.g62a8d224e6-goog
>

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-07-13 23:26 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-21 16:31 [PATCH 0/3] Add AMD SEV and SEV-ES local migration support Peter Gonda
2021-06-21 16:31 ` [PATCH 1/3] KVM, SEV: Refactor out function for unregistering encrypted regions Peter Gonda
2021-07-12 20:05   ` Brijesh Singh
2021-07-13 21:40   ` Sean Christopherson
2021-07-13 23:18   ` Marc Orr
2021-06-21 16:31 ` [PATCH 2/3] KVM, SEV: Add support for SEV local migration Peter Gonda
2021-07-12 21:09   ` Brijesh Singh
2021-07-13 20:12     ` Peter Gonda
2021-07-13 22:12   ` Sean Christopherson
2021-07-13 23:24   ` Marc Orr
2021-06-21 16:31 ` [PATCH 3/3] KVM, SEV: Add support for SEV-ES " Peter Gonda
2021-07-13 18:41   ` Brijesh Singh
2021-07-13 22:21   ` Sean Christopherson
2021-07-13 23:25   ` Marc Orr
