* [PATCH v8 00/18] Add AMD SEV guest live migration support
@ 2020-05-05 21:13 Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 01/18] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
                   ` (18 more replies)
  0 siblings, 19 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:13 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

This series adds support for the AMD SEV guest live migration commands. To
protect the confidentiality of an SEV guest's memory while it is in transit,
we need to use the SEV commands defined in the SEV API spec [1].

SEV guest VMs have the concept of private and shared memory. Private memory
is encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. The commands provided by the SEV FW are meant to be
used for the private memory only. This patch series introduces a new hypercall
that the guest OS can use to notify the hypervisor of a page's encryption
status. If a page is encrypted with the guest-specific key, the SEV commands
are used during migration; if not, we fall back to the default migration path.

The series adds new ioctls KVM_{SET,GET}_PAGE_ENC_BITMAP. The ioctls can be
used by qemu to retrieve the page encryption bitmap, which it can consult
during migration to determine whether a page is encrypted.
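
For illustration, a minimal userspace sketch of retrieving the bitmap
(struct kvm_page_enc_bitmap and KVM_GET_PAGE_ENC_BITMAP are the
definitions added by this series, not mainline uapi):

  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* one bit per page; a set bit marks the page private/encrypted */
  static void *get_page_enc_bitmap(int vm_fd, __u64 start_gfn, __u64 num_pages)
  {
          struct kvm_page_enc_bitmap bmap = {
                  .start_gfn = start_gfn,
                  .num_pages = num_pages,
          };

          /* one bit per page, rounded up to whole bytes */
          bmap.enc_bitmap = calloc(1, (num_pages + 7) / 8);
          if (!bmap.enc_bitmap)
                  return NULL;

          if (ioctl(vm_fd, KVM_GET_PAGE_ENC_BITMAP, &bmap) < 0) {
                  free(bmap.enc_bitmap);
                  return NULL;
          }
          return bmap.enc_bitmap;
  }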

This section describes how the SEV live migration feature is negotiated
between the host and guest. The host indicates support for this feature
via KVM_FEATURE_CPUID. The guest firmware (OVMF) detects this feature and
sets a UEFI environment variable indicating OVMF support for live
migration. The guest kernel also detects the host support for this
feature via cpuid and, in case of an EFI boot, verifies that OVMF also
supports this feature by reading the UEFI environment variable; if it is
set, the kernel enables live migration on the host by writing to a custom
MSR. If not booted under EFI, it simply enables the feature by writing to
the custom MSR. The host returns an error as part of the
SET_PAGE_ENC_BITMAP ioctl if the guest has not enabled live migration.
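
As a rough guest-side sketch of the above (KVM_FEATURE_SEV_LIVE_MIGRATION
and MSR_KVM_SEV_LIVE_MIG_EN are introduced by this series; the enable-bit
name KVM_SEV_LIVE_MIGRATION_ENABLED is assumed here):

  static void sev_live_migration_check(void)
  {
          if (!sev_active() ||
              !kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
                  return;
          /*
           * On an EFI boot, the UEFI variable set by OVMF is verified
           * first (omitted here); then the guest opts in by writing
           * the custom MSR.
           */
          wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN, KVM_SEV_LIVE_MIGRATION_ENABLED);
  }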

A branch containing these patches is available here:
https://github.com/AMDESE/linux/tree/sev-migration-v8

[1] https://developer.amd.com/wp-content/resources/55766.PDF

Changes since v7:
- Removed the hypervisor specific hypercall/paravirt callback for
  SEV live migration and moved back to calling kvm_sev_hypercall3 
  directly.
- Fixed build errors reported by the kbuild test robot <lkp@intel.com>,
  specifically a build error when CONFIG_HYPERVISOR_GUEST=y and
  CONFIG_AMD_MEM_ENCRYPT=n.
- Implicitly enabled live migration for incoming VM(s) to handle 
  A->B->C->... VM migrations.
- Fixed Documentation as per comments on v6 patches.
- Fixed error return path in sev_send_update_data() as per comments 
  on v6 patches. 

Changes since v6:
- Rebased to mainline and refactored for the new split SVM
  infrastructure.
- Moved to static allocation of the unified Page Encryption bitmap
  instead of dynamically resizing the bitmap. The static allocation is
  done implicitly by extending the kvm_arch_commit_memory_region()
  callback with an svm-specific x86_ops hook which reads the userspace
  provided memory regions/memslots, calculates the amount of guest RAM
  managed by KVM and grows the bitmap accordingly.
- Fixed KVM_SET_PAGE_ENC_BITMAP ioctl to set the whole bitmap instead
  of simply clearing specific bits.
- Removed KVM_PAGE_ENC_BITMAP_RESET ioctl, which is now performed using
  KVM_SET_PAGE_ENC_BITMAP.
- Extended guest support for enabling the Live Migration feature by
  adding a check for a UEFI environment variable indicating OVMF support
  for the Live Migration feature, and additionally checking for the KVM
  capability for the same feature. If not booted under EFI, we simply
  check for the KVM capability.
- Add hypervisor specific hypercall for SEV live migration by adding
  a new paravirt callback as part of x86_hyper_runtime.
  (x86 hypervisor specific runtime callbacks)
- Moving MSR handling for MSR_KVM_SEV_LIVE_MIG_EN into svm/sev code 
  and adding check for SEV live migration enabled by guest in the 
  KVM_GET_PAGE_ENC_BITMAP ioctl.
- Instead of the complete __bss_decrypted section, only specific variables
  such as hv_clock_boot and wall_clock are marked as decrypted in the
  page encryption bitmap.

Changes since v5:
- Fixed build errors reported by the kbuild test robot <lkp@intel.com>.

Changes since v4:
- Host support has been added to extend the KVM capabilities/feature bits
  to include a new KVM_FEATURE_SEV_LIVE_MIGRATION, which the guest can
  query for host-side support for SEV live migration, and a new custom
  MSR MSR_KVM_SEV_LIVE_MIG_EN is added for the guest to enable the SEV
  live migration feature.
- Ensure that _bss_decrypted section is marked as decrypted in the
  page encryption bitmap.
- Fixed the KVM_GET_PAGE_ENC_BITMAP ioctl to return the correct bitmap
  as per the number of pages being requested by the user. Ensure that
  we only copy enough bytes to cover bmap->num_pages bits in the
  userspace buffer; if bmap->num_pages is not byte-aligned, we read the
  trailing bits from userspace and copy those bits back as-is. This
  fixes guest page corruption issues observed after migration completion.
- Add kexec support for SEV Live Migration to reset the host's
  page encryption bitmap related to kernel specific page encryption
  status settings before we load a new kernel by kexec. We cannot
  reset the complete page encryption bitmap here as we need to
  retain the UEFI/OVMF firmware specific settings.

Changes since v3:
- Rebasing to mainline and testing.
- Adding a new KVM_PAGE_ENC_BITMAP_RESET ioctl, which resets the 
  page encryption bitmap on a guest reboot event.
- Adding a more reliable sanity check for GPA range being passed to
  the hypercall to ensure that guest MMIO ranges are also marked
  in the page encryption bitmap.

Changes since v2:
 - reset the page encryption bitmap on vcpu reboot

Changes since v1:
 - Add support to share the page encryption between the source and target
   machine.
 - Fix review feedbacks from Tom Lendacky.
 - Add check to limit the session blob length.
 - Update KVM_GET_PAGE_ENC_BITMAP ioctl to use the base_gfn instead of
   the memory slot when querying the bitmap.

Ashish Kalra (7):
  KVM: SVM: Add support for static allocation of unified Page Encryption
    Bitmap.
  KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature &
    Custom MSR.
  EFI: Introduce the new AMD Memory Encryption GUID.
  KVM: x86: Add guest support for detecting and enabling SEV Live
    Migration feature.
  KVM: x86: Mark _bss_decrypted section variables as decrypted in page
    encryption bitmap.
  KVM: x86: Add kexec support for SEV Live Migration.
  KVM: SVM: Enable SEV live migration feature implicitly on Incoming
    VM(s).

Brijesh Singh (11):
  KVM: SVM: Add KVM_SEV SEND_START command
  KVM: SVM: Add KVM_SEND_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_SEND_FINISH command
  KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
  KVM: x86: Add AMD SEV specific Hypercall3
  KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
  mm: x86: Invoke hypercall when page encryption status is changed
  KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl

 .../virt/kvm/amd-memory-encryption.rst        | 120 +++
 Documentation/virt/kvm/api.rst                |  71 ++
 Documentation/virt/kvm/cpuid.rst              |   5 +
 Documentation/virt/kvm/hypercalls.rst         |  15 +
 Documentation/virt/kvm/msr.rst                |  10 +
 arch/x86/include/asm/kvm_host.h               |   7 +
 arch/x86/include/asm/kvm_para.h               |  12 +
 arch/x86/include/asm/mem_encrypt.h            |  11 +
 arch/x86/include/asm/paravirt.h               |  10 +
 arch/x86/include/asm/paravirt_types.h         |   2 +
 arch/x86/include/uapi/asm/kvm_para.h          |   5 +
 arch/x86/kernel/kvm.c                         |  90 +++
 arch/x86/kernel/kvmclock.c                    |  12 +
 arch/x86/kernel/paravirt.c                    |   1 +
 arch/x86/kvm/svm/sev.c                        | 732 +++++++++++++++++-
 arch/x86/kvm/svm/svm.c                        |  21 +
 arch/x86/kvm/svm/svm.h                        |   9 +
 arch/x86/kvm/vmx/vmx.c                        |   1 +
 arch/x86/kvm/x86.c                            |  35 +
 arch/x86/mm/mem_encrypt.c                     |  68 +-
 arch/x86/mm/pat/set_memory.c                  |   7 +
 include/linux/efi.h                           |   1 +
 include/linux/psp-sev.h                       |   8 +-
 include/uapi/linux/kvm.h                      |  52 ++
 include/uapi/linux/kvm_para.h                 |   1 +
 25 files changed, 1297 insertions(+), 9 deletions(-)

-- 
2.17.1



* [PATCH v8 01/18] KVM: SVM: Add KVM_SEV SEND_START command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
@ 2020-05-05 21:14 ` Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:14 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command is used to create an outgoing SEV guest encryption context.
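
For context, a hedged sketch of the userspace call sequence (the command
is reached via the existing KVM_MEMORY_ENCRYPT_OP vm ioctl; <sys/ioctl.h>
and <linux/kvm.h> are assumed, and vm_fd/sev_fd are the VM and /dev/sev
file descriptors):

  struct kvm_sev_send_start start = {};   /* session_len == 0: query */
  struct kvm_sev_cmd cmd = {
          .id     = KVM_SEV_SEND_START,
          .data   = (__u64)(uintptr_t)&start,
          .sev_fd = sev_fd,
  };

  ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  /*
   * Allocate start.session_len bytes, fill in the policy and the
   * pdh/plat/amd certificate pointers and lengths, then call again
   * to actually create the outgoing context.
   */
  ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);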

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  27 ++++
 arch/x86/kvm/svm/sev.c                        | 125 ++++++++++++++++++
 include/linux/psp-sev.h                       |   8 +-
 include/uapi/linux/kvm.h                      |  12 ++
 4 files changed, 168 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index c3129b9ba5cb..59cb59bd4675 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -263,6 +263,34 @@ Returns: 0 on success, -negative on error
                 __u32 trans_len;
         };
 
+10. KVM_SEV_SEND_START
+----------------------
+
+The KVM_SEV_SEND_START command can be used by the hypervisor to create an
+outgoing guest encryption context.
+
+Parameters (in): struct kvm_sev_send_start
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_send_start {
+                __u32 policy;                 /* guest policy */
+
+                __u64 pdh_cert_uaddr;         /* platform Diffie-Hellman certificate */
+                __u32 pdh_cert_len;
+
+                __u64 plat_certs_uaddr;        /* platform certificate chain */
+                __u32 plat_certs_len;
+
+                __u64 amd_certs_uaddr;        /* AMD certificate */
+                __u32 amd_certs_len;
+
+                __u64 session_uaddr;          /* Guest session information */
+                __u32 session_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index cf912b4aaba8..5a15b43b4349 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -913,6 +913,128 @@ static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+/* Userspace wants to query session length. */
+static int
+__sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
+				      struct kvm_sev_send_start *params)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_start *data;
+	int ret;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (data == NULL)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);
+
+	params->session_len = data->session_len;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
+				sizeof(struct kvm_sev_send_start)))
+		ret = -EFAULT;
+
+	kfree(data);
+	return ret;
+}
+
+static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_start *data;
+	struct kvm_sev_send_start params;
+	void *amd_certs, *session_data;
+	void *pdh_cert, *plat_certs;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+				sizeof(struct kvm_sev_send_start)))
+		return -EFAULT;
+
+	/* if session_len is zero, userspace wants to query the session length */
+	if (!params.session_len)
+		return __sev_send_start_query_session_length(kvm, argp,
+				&params);
+
+	/* some sanity checks */
+	if (!params.pdh_cert_uaddr || !params.pdh_cert_len ||
+	    !params.session_uaddr || params.session_len > SEV_FW_BLOB_MAX_SIZE)
+		return -EINVAL;
+
+	/* allocate the memory to hold the session data blob */
+	session_data = kmalloc(params.session_len, GFP_KERNEL_ACCOUNT);
+	if (!session_data)
+		return -ENOMEM;
+
+	/* copy the certificate blobs from userspace */
+	pdh_cert = psp_copy_user_blob(params.pdh_cert_uaddr,
+				params.pdh_cert_len);
+	if (IS_ERR(pdh_cert)) {
+		ret = PTR_ERR(pdh_cert);
+		goto e_free_session;
+	}
+
+	plat_certs = psp_copy_user_blob(params.plat_certs_uaddr,
+				params.plat_certs_len);
+	if (IS_ERR(plat_certs)) {
+		ret = PTR_ERR(plat_certs);
+		goto e_free_pdh;
+	}
+
+	amd_certs = psp_copy_user_blob(params.amd_certs_uaddr,
+				params.amd_certs_len);
+	if (IS_ERR(amd_certs)) {
+		ret = PTR_ERR(amd_certs);
+		goto e_free_plat_cert;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (data == NULL) {
+		ret = -ENOMEM;
+		goto e_free_amd_cert;
+	}
+
+	/* populate the FW SEND_START field with system physical address */
+	data->pdh_cert_address = __psp_pa(pdh_cert);
+	data->pdh_cert_len = params.pdh_cert_len;
+	data->plat_certs_address = __psp_pa(plat_certs);
+	data->plat_certs_len = params.plat_certs_len;
+	data->amd_certs_address = __psp_pa(amd_certs);
+	data->amd_certs_len = params.amd_certs_len;
+	data->session_address = __psp_pa(session_data);
+	data->session_len = params.session_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);
+
+	if (!ret && copy_to_user((void __user *)(uintptr_t)params.session_uaddr,
+			session_data, params.session_len)) {
+		ret = -EFAULT;
+		goto e_free;
+	}
+
+	params.policy = data->policy;
+	params.session_len = data->session_len;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
+				sizeof(struct kvm_sev_send_start)))
+		ret = -EFAULT;
+
+e_free:
+	kfree(data);
+e_free_amd_cert:
+	kfree(amd_certs);
+e_free_plat_cert:
+	kfree(plat_certs);
+e_free_pdh:
+	kfree(pdh_cert);
+e_free_session:
+	kfree(session_data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -957,6 +1079,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_LAUNCH_SECRET:
 		r = sev_launch_secret(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_START:
+		r = sev_send_start(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/linux/psp-sev.h b/include/linux/psp-sev.h
index 5167bf2bfc75..9f63b9d48b63 100644
--- a/include/linux/psp-sev.h
+++ b/include/linux/psp-sev.h
@@ -323,11 +323,11 @@ struct sev_data_send_start {
 	u64 pdh_cert_address;			/* In */
 	u32 pdh_cert_len;			/* In */
 	u32 reserved1;
-	u64 plat_cert_address;			/* In */
-	u32 plat_cert_len;			/* In */
+	u64 plat_certs_address;			/* In */
+	u32 plat_certs_len;			/* In */
 	u32 reserved2;
-	u64 amd_cert_address;			/* In */
-	u32 amd_cert_len;			/* In */
+	u64 amd_certs_address;			/* In */
+	u32 amd_certs_len;			/* In */
 	u32 reserved3;
 	u64 session_address;			/* In */
 	u32 session_len;			/* In/Out */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 428c7dde6b4b..8827d43e2684 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1598,6 +1598,18 @@ struct kvm_sev_dbg {
 	__u32 len;
 };
 
+struct kvm_sev_send_start {
+	__u32 policy;
+	__u64 pdh_cert_uaddr;
+	__u32 pdh_cert_len;
+	__u64 plat_certs_uaddr;
+	__u32 plat_certs_len;
+	__u64 amd_certs_uaddr;
+	__u32 amd_certs_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 01/18] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
@ 2020-05-05 21:14 ` Ashish Kalra
  2020-05-05 22:48   ` Venu Busireddy
  2020-05-05 21:15 ` [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:14 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command is used for encrypting the guest memory region using the encryption
context created with KVM_SEV_SEND_START.
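
For context, a hedged sketch of the intended usage (vm_fd/sev_fd and the
headers as in the previous patch; zeroed lengths first query the required
buffer sizes, matching sev_send_update_data() below):

  struct kvm_sev_send_update_data data = {};  /* zero lengths: query */
  struct kvm_sev_cmd cmd = {
          .id     = KVM_SEV_SEND_UPDATE_DATA,
          .data   = (__u64)(uintptr_t)&data,
          .sev_fd = sev_fd,
  };

  ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  /*
   * Allocate data.hdr_len and data.trans_len byte buffers, then for
   * each private guest page set guest_uaddr/guest_len (the range must
   * not cross a page boundary) and call again; the encrypted payload
   * to transmit lands in the trans_uaddr buffer.
   */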

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  24 ++++
 arch/x86/kvm/svm/sev.c                        | 135 +++++++++++++++++-
 include/uapi/linux/kvm.h                      |   9 ++
 3 files changed, 164 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 59cb59bd4675..d0dfa5b54e4f 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -290,6 +290,30 @@ Returns: 0 on success, -negative on error
                 __u32 session_len;
         };
 
+11. KVM_SEV_SEND_UPDATE_DATA
+----------------------------
+
+The KVM_SEV_SEND_UPDATE_DATA command can be used by the hypervisor to encrypt the
+outgoing guest memory region with the encryption context created using
+KVM_SEV_SEND_START.
+
+Parameters (in): struct kvm_sev_send_update_data
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_send_update_data {
+                __u64 hdr_uaddr;        /* userspace address containing the packet header */
+                __u32 hdr_len;
+
+                __u64 guest_uaddr;      /* the source memory region to be encrypted */
+                __u32 guest_len;
+
+                __u64 trans_uaddr;      /* the destination memory region */
+                __u32 trans_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5a15b43b4349..7031b660f64d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -23,6 +23,7 @@ static DECLARE_RWSEM(sev_deactivate_lock);
 static DEFINE_MUTEX(sev_bitmap_lock);
 unsigned int max_sev_asid;
 static unsigned int min_sev_asid;
+static unsigned long sev_me_mask;
 static unsigned long *sev_asid_bitmap;
 static unsigned long *sev_reclaim_asid_bitmap;
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
@@ -1035,6 +1036,123 @@ static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+/* Userspace wants to query either header or trans length. */
+static int
+__sev_send_update_data_query_lengths(struct kvm *kvm, struct kvm_sev_cmd *argp,
+				     struct kvm_sev_send_update_data *params)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_update_data *data;
+	int ret;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
+
+	params->hdr_len = data->hdr_len;
+	params->trans_len = data->trans_len;
+
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
+			 sizeof(struct kvm_sev_send_update_data)))
+		ret = -EFAULT;
+
+	kfree(data);
+	return ret;
+}
+
+static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_update_data *data;
+	struct kvm_sev_send_update_data params;
+	void *hdr, *trans_data;
+	struct page **guest_page;
+	unsigned long n;
+	int ret, offset;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_send_update_data)))
+		return -EFAULT;
+
+	/* userspace wants to query either header or trans length */
+	if (!params.trans_len || !params.hdr_len)
+		return __sev_send_update_data_query_lengths(kvm, argp, &params);
+
+	if (!params.trans_uaddr || !params.guest_uaddr ||
+	    !params.guest_len || !params.hdr_uaddr)
+		return -EINVAL;
+
+	/* Check if we are crossing the page boundary */
+	offset = params.guest_uaddr & (PAGE_SIZE - 1);
+	if ((params.guest_len + offset > PAGE_SIZE))
+		return -EINVAL;
+
+	/* Pin guest memory */
+	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+				    PAGE_SIZE, &n, 0);
+	if (!guest_page)
+		return -EFAULT;
+
+	/* allocate memory for header and transport buffer */
+	ret = -ENOMEM;
+	hdr = kmalloc(params.hdr_len, GFP_KERNEL_ACCOUNT);
+	if (!hdr)
+		goto e_unpin;
+
+	trans_data = kmalloc(params.trans_len, GFP_KERNEL_ACCOUNT);
+	if (!trans_data)
+		goto e_free_hdr;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto e_free_trans_data;
+
+	data->hdr_address = __psp_pa(hdr);
+	data->hdr_len = params.hdr_len;
+	data->trans_address = __psp_pa(trans_data);
+	data->trans_len = params.trans_len;
+
+	/* The SEND_UPDATE_DATA command requires C-bit to be always set. */
+	data->guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) +
+				offset;
+	data->guest_address |= sev_me_mask;
+	data->guest_len = params.guest_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
+
+	if (ret)
+		goto e_free;
+
+	/* copy transport buffer to user space */
+	if (copy_to_user((void __user *)(uintptr_t)params.trans_uaddr,
+			 trans_data, params.trans_len)) {
+		ret = -EFAULT;
+		goto e_free;
+	}
+
+	/* Copy packet header to userspace. */
+	ret = copy_to_user((void __user *)(uintptr_t)params.hdr_uaddr, hdr,
+				params.hdr_len);
+
+e_free:
+	kfree(data);
+e_free_trans_data:
+	kfree(trans_data);
+e_free_hdr:
+	kfree(hdr);
+e_unpin:
+	sev_unpin_memory(kvm, guest_page, n);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1082,6 +1200,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_START:
 		r = sev_send_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_UPDATE_DATA:
+		r = sev_send_update_data(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
@@ -1238,16 +1359,22 @@ void sev_vm_destroy(struct kvm *kvm)
 int __init sev_hardware_setup(void)
 {
 	struct sev_user_data_status *status;
+	u32 eax, ebx;
 	int rc;
 
-	/* Maximum number of encrypted guests supported simultaneously */
-	max_sev_asid = cpuid_ecx(0x8000001F);
+	/*
+	 * Query the memory encryption information.
+	 *  EBX:  Bit 0:5 Pagetable bit position used to indicate encryption
+	 *  (aka Cbit).
+	 *  ECX:  Maximum number of encrypted guests supported simultaneously.
+	 *  EDX:  Minimum ASID value that should be used for SEV guest.
+	 */
+	cpuid(0x8000001f, &eax, &ebx, &max_sev_asid, &min_sev_asid);
 
 	if (!svm_sev_enabled())
 		return 1;
 
-	/* Minimum ASID value that should be used for SEV guest */
-	min_sev_asid = cpuid_edx(0x8000001F);
+	sev_me_mask = 1UL << (ebx & 0x3f);
 
 	/* Initialize SEV ASID bitmaps */
 	sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 8827d43e2684..7aaed8ee33cf 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1610,6 +1610,15 @@ struct kvm_sev_send_start {
 	__u32 session_len;
 };
 
+struct kvm_sev_send_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 01/18] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
@ 2020-05-05 21:15 ` Ashish Kalra
  2020-05-05 22:51   ` Venu Busireddy
  2020-05-05 21:15 ` [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:15 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command is used to finalize the encryption context created with the
KVM_SEV_SEND_START command.
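
Together with the previous two patches this completes the send side,
roughly (hedged sketch; issue_cmd() is a hypothetical wrapper around
the KVM_MEMORY_ENCRYPT_OP ioctl):

  issue_cmd(KVM_SEV_SEND_START, &start);     /* create outgoing context */
  /* for each private page, per the page encryption bitmap: */
  issue_cmd(KVM_SEV_SEND_UPDATE_DATA, &update);
  issue_cmd(KVM_SEV_SEND_FINISH, NULL);      /* delete the context */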

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  8 +++++++
 arch/x86/kvm/svm/sev.c                        | 23 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index d0dfa5b54e4f..93884ec8918e 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -314,6 +314,14 @@ Returns: 0 on success, -negative on error
                 __u32 trans_len;
         };
 
+12. KVM_SEV_SEND_FINISH
+------------------------
+
+After completion of the migration flow, the KVM_SEV_SEND_FINISH command can be
+issued by the hypervisor to delete the encryption context.
+
+Returns: 0 on success, -negative on error
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7031b660f64d..4d3031c9fdcf 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1153,6 +1153,26 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_finish *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_FINISH, data, &argp->error);
+
+	kfree(data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1203,6 +1223,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_UPDATE_DATA:
 		r = sev_send_update_data(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_FINISH:
+		r = sev_send_finish(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.17.1



* [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (2 preceding siblings ...)
  2020-05-05 21:15 ` [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
@ 2020-05-05 21:15 ` Ashish Kalra
  2020-05-05 22:52   ` Venu Busireddy
  2020-05-05 21:15 ` [PATCH v8 05/18] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
                   ` (14 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:15 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command is used to create the encryption context for an incoming
SEV guest. The encryption context can later be used by the hypervisor
to import the incoming data into the SEV guest memory space.
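
On the migration target this mirrors the send side (hedged sketch;
issue_cmd() is again a hypothetical KVM_MEMORY_ENCRYPT_OP wrapper, and
the UPDATE_DATA/FINISH commands arrive in the following patches):

  issue_cmd(KVM_SEV_RECEIVE_START, &start);  /* create context, bind ASID */
  /* for each incoming packet: */
  issue_cmd(KVM_SEV_RECEIVE_UPDATE_DATA, &update);
  issue_cmd(KVM_SEV_RECEIVE_FINISH, NULL);   /* guest is ready to run */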

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        | 29 +++++++
 arch/x86/kvm/svm/sev.c                        | 81 +++++++++++++++++++
 include/uapi/linux/kvm.h                      |  9 +++
 3 files changed, 119 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 93884ec8918e..337bf6a8a3ee 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -322,6 +322,35 @@ issued by the hypervisor to delete the encryption context.
 
 Returns: 0 on success, -negative on error
 
+13. KVM_SEV_RECEIVE_START
+-------------------------
+
+The KVM_SEV_RECEIVE_START command is used for creating the memory encryption
+context for an incoming SEV guest. To create the encryption context, the user must
+provide a guest policy, the platform public Diffie-Hellman (PDH) key and session
+information.
+
+Parameters: struct  kvm_sev_receive_start (in/out)
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_receive_start {
+                __u32 handle;           /* if zero then firmware creates a new handle */
+                __u32 policy;           /* guest's policy */
+
+                __u64 pdh_uaddr;        /* userspace address pointing to the PDH key */
+                __u32 pdh_len;
+
+                __u64 session_uaddr;    /* userspace address which points to the guest session information */
+                __u32 session_len;
+        };
+
+On success, the 'handle' field contains the new handle; on error, a negative value is returned.
+
+For more details, see SEV spec Section 6.12.
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4d3031c9fdcf..b575aa8e27af 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1173,6 +1173,84 @@ static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_receive_start *start;
+	struct kvm_sev_receive_start params;
+	int *error = &argp->error;
+	void *session_data;
+	void *pdh_data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	/* Get parameter from the userspace */
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_receive_start)))
+		return -EFAULT;
+
+	/* some sanity checks */
+	if (!params.pdh_uaddr || !params.pdh_len ||
+	    !params.session_uaddr || !params.session_len)
+		return -EINVAL;
+
+	pdh_data = psp_copy_user_blob(params.pdh_uaddr, params.pdh_len);
+	if (IS_ERR(pdh_data))
+		return PTR_ERR(pdh_data);
+
+	session_data = psp_copy_user_blob(params.session_uaddr,
+			params.session_len);
+	if (IS_ERR(session_data)) {
+		ret = PTR_ERR(session_data);
+		goto e_free_pdh;
+	}
+
+	ret = -ENOMEM;
+	start = kzalloc(sizeof(*start), GFP_KERNEL);
+	if (!start)
+		goto e_free_session;
+
+	start->handle = params.handle;
+	start->policy = params.policy;
+	start->pdh_cert_address = __psp_pa(pdh_data);
+	start->pdh_cert_len = params.pdh_len;
+	start->session_address = __psp_pa(session_data);
+	start->session_len = params.session_len;
+
+	/* create memory encryption context */
+	ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_RECEIVE_START, start,
+				error);
+	if (ret)
+		goto e_free;
+
+	/* Bind ASID to this guest */
+	ret = sev_bind_asid(kvm, start->handle, error);
+	if (ret)
+		goto e_free;
+
+	params.handle = start->handle;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data,
+			 &params, sizeof(struct kvm_sev_receive_start))) {
+		ret = -EFAULT;
+		sev_unbind_asid(kvm, start->handle);
+		goto e_free;
+	}
+
+	sev->handle = start->handle;
+	sev->fd = argp->sev_fd;
+
+e_free:
+	kfree(start);
+e_free_session:
+	kfree(session_data);
+e_free_pdh:
+	kfree(pdh_data);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1226,6 +1304,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_FINISH:
 		r = sev_send_finish(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_START:
+		r = sev_receive_start(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7aaed8ee33cf..24ac57151d53 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1619,6 +1619,15 @@ struct kvm_sev_send_update_data {
 	__u32 trans_len;
 };
 
+struct kvm_sev_receive_start {
+	__u32 handle;
+	__u32 policy;
+	__u64 pdh_uaddr;
+	__u32 pdh_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v8 05/18] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (3 preceding siblings ...)
  2020-05-05 21:15 ` [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
@ 2020-05-05 21:15 ` Ashish Kalra
  2020-05-05 21:16 ` [PATCH v8 06/18] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:15 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command is used for copying the incoming buffer into the
SEV guest memory space.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        | 24 ++++++
 arch/x86/kvm/svm/sev.c                        | 79 +++++++++++++++++++
 include/uapi/linux/kvm.h                      |  9 +++
 3 files changed, 112 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 337bf6a8a3ee..04333ec1b001 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -351,6 +351,30 @@ On success, the 'handle' field contains a new handle and on error, a negative va
 
 For more details, see SEV spec Section 6.12.
 
+14. KVM_SEV_RECEIVE_UPDATE_DATA
+-------------------------------
+
+The KVM_SEV_RECEIVE_UPDATE_DATA command can be used by the hypervisor to copy
+the incoming buffers into the guest memory region with the encryption context
+created during KVM_SEV_RECEIVE_START.
+
+Parameters (in): struct kvm_sev_receive_update_data
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_receive_update_data {
+                __u64 hdr_uaddr;        /* userspace address containing the packet header */
+                __u32 hdr_len;
+
+                __u64 guest_uaddr;      /* the destination guest memory region */
+                __u32 guest_len;
+
+                __u64 trans_uaddr;      /* the incoming buffer memory region  */
+                __u32 trans_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b575aa8e27af..165a612f317a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1251,6 +1251,82 @@ static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_receive_update_data params;
+	struct sev_data_receive_update_data *data;
+	void *hdr = NULL, *trans = NULL;
+	struct page **guest_page;
+	unsigned long n;
+	int ret, offset;
+
+	if (!sev_guest(kvm))
+		return -EINVAL;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_receive_update_data)))
+		return -EFAULT;
+
+	if (!params.hdr_uaddr || !params.hdr_len ||
+	    !params.guest_uaddr || !params.guest_len ||
+	    !params.trans_uaddr || !params.trans_len)
+		return -EINVAL;
+
+	/* Check if we are crossing the page boundary */
+	offset = params.guest_uaddr & (PAGE_SIZE - 1);
+	if ((params.guest_len + offset > PAGE_SIZE))
+		return -EINVAL;
+
+	hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
+	if (IS_ERR(hdr))
+		return PTR_ERR(hdr);
+
+	trans = psp_copy_user_blob(params.trans_uaddr, params.trans_len);
+	if (IS_ERR(trans)) {
+		ret = PTR_ERR(trans);
+		goto e_free_hdr;
+	}
+
+	ret = -ENOMEM;
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto e_free_trans;
+
+	data->hdr_address = __psp_pa(hdr);
+	data->hdr_len = params.hdr_len;
+	data->trans_address = __psp_pa(trans);
+	data->trans_len = params.trans_len;
+
+	/* Pin guest memory */
+	ret = -EFAULT;
+	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+				    PAGE_SIZE, &n, 0);
+	if (!guest_page)
+		goto e_free;
+
+	/* The RECEIVE_UPDATE_DATA command requires C-bit to be always set. */
+	data->guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) +
+				offset;
+	data->guest_address |= sev_me_mask;
+	data->guest_len = params.guest_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_UPDATE_DATA, data,
+				&argp->error);
+
+	sev_unpin_memory(kvm, guest_page, n);
+
+e_free:
+	kfree(data);
+e_free_trans:
+	kfree(trans);
+e_free_hdr:
+	kfree(hdr);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1307,6 +1383,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_RECEIVE_START:
 		r = sev_receive_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_UPDATE_DATA:
+		r = sev_receive_update_data(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 24ac57151d53..0fe1d206d750 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1628,6 +1628,15 @@ struct kvm_sev_receive_start {
 	__u32 session_len;
 };
 
+struct kvm_sev_receive_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v8 06/18] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (4 preceding siblings ...)
  2020-05-05 21:15 ` [PATCH v8 05/18] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
@ 2020-05-05 21:16 ` Ashish Kalra
  2020-05-05 21:16 ` [PATCH v8 07/18] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:16 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The command finalizes the guest receiving process and makes the SEV guest
ready for execution.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  8 +++++++
 arch/x86/kvm/svm/sev.c                        | 23 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 04333ec1b001..de5a00d86506 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -375,6 +375,14 @@ Returns: 0 on success, -negative on error
                 __u32 trans_len;
         };
 
+15. KVM_SEV_RECEIVE_FINISH
+--------------------------
+
+After completion of the migration flow, the KVM_SEV_RECEIVE_FINISH command can be
+issued by the hypervisor to make the guest ready for execution.
+
+Returns: 0 on success, -negative on error
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 165a612f317a..698704defbcd 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1327,6 +1327,26 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_receive_finish *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, data, &argp->error);
+
+	kfree(data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1386,6 +1406,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_RECEIVE_UPDATE_DATA:
 		r = sev_receive_update_data(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_FINISH:
+		r = sev_receive_finish(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.17.1



* [PATCH v8 07/18] KVM: x86: Add AMD SEV specific Hypercall3
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (5 preceding siblings ...)
  2020-05-05 21:16 ` [PATCH v8 06/18] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
@ 2020-05-05 21:16 ` Ashish Kalra
  2020-05-05 21:17 ` [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:16 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The KVM hypercall framework relies on the alternatives framework to
patch VMCALL -> VMMCALL on AMD platforms. If a hypercall is made before
apply_alternatives() is called, it defaults to VMCALL. That approach
works fine on a non-SEV guest: a VMCALL causes a #UD, and the
hypervisor is able to decode the instruction and do the right thing.
But when SEV is active, guest memory is encrypted with the guest key,
and the hypervisor will not be able to decode the instruction bytes.

Add an SEV-specific hypercall3 which unconditionally uses VMMCALL. The
hypercall will be used by the SEV guest to notify the hypervisor about
encrypted pages.
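
For contrast, the generic helpers rely on the alternatives framework
(mainline definition):

  #define KVM_HYPERCALL \
          ALTERNATIVE("vmcall", "vmmcall", X86_FEATURE_VMMCALL)

which is only safe after apply_alternatives() has run; the new helper
hardcodes "vmmcall" so an SEV guest can use it at any point, e.g.
(illustrative arguments; the hypercall number is added in the next patch):

  kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS, gpa, npages, enc);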

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/kvm_para.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 9b4df6eaa11a..6c09255633a4 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -84,6 +84,18 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
 	return ret;
 }
 
+static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
+				      unsigned long p2, unsigned long p3)
+{
+	long ret;
+
+	asm volatile("vmmcall"
+		     : "=a"(ret)
+		     : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
+		     : "memory");
+	return ret;
+}
+
 #ifdef CONFIG_KVM_GUEST
 bool kvm_para_available(void);
 unsigned int kvm_arch_para_features(void);
-- 
2.17.1



* [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (6 preceding siblings ...)
  2020-05-05 21:16 ` [PATCH v8 07/18] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
@ 2020-05-05 21:17 ` Ashish Kalra
  2020-05-30  2:05   ` Steve Rutherford
  2020-05-05 21:17 ` [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl Ashish Kalra
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:17 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

This hypercall is used by the SEV guest to notify the hypervisor of a
change in the page encryption status. The hypercall should be invoked
only when the encryption attribute is changed from encrypted ->
decrypted and vice versa. By default all guest pages are considered
encrypted.
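
For example (hedged sketch; paddr/npages are illustrative), after the
guest converts a range to shared via set_memory_decrypted(), patch 10
of this series has it issue:

  kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS, paddr, npages, 0);

and the host then clears the corresponding bits in its page encryption
bitmap.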

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 Documentation/virt/kvm/hypercalls.rst | 15 +++++
 arch/x86/include/asm/kvm_host.h       |  2 +
 arch/x86/kvm/svm/sev.c                | 90 +++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c                |  2 +
 arch/x86/kvm/svm/svm.h                |  4 ++
 arch/x86/kvm/vmx/vmx.c                |  1 +
 arch/x86/kvm/x86.c                    |  6 ++
 include/uapi/linux/kvm_para.h         |  1 +
 8 files changed, 121 insertions(+)

diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
index dbaf207e560d..ff5287e68e81 100644
--- a/Documentation/virt/kvm/hypercalls.rst
+++ b/Documentation/virt/kvm/hypercalls.rst
@@ -169,3 +169,18 @@ a0: destination APIC ID
 
 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
 	        any of the IPI target vCPUs was preempted.
+
+
+8. KVM_HC_PAGE_ENC_STATUS
+-------------------------
+:Architecture: x86
+:Status: active
+:Purpose: Notify encryption status changes of guest pages (SEV guest)
+
+a0: the guest physical address of the start page
+a1: the number of pages
+a2: encryption attribute
+
+   Where:
+	* 1: Encryption attribute is set
+	* 0: Encryption attribute is cleared
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 42a2d0d3984a..4a8ee22f4f5b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1254,6 +1254,8 @@ struct kvm_x86_ops {
 
 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+	int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
+				  unsigned long sz, unsigned long mode);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 698704defbcd..f088467708f0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1347,6 +1347,93 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	unsigned long *map;
+	unsigned long sz;
+
+	if (sev->page_enc_bmap_size >= new_size)
+		return 0;
+
+	sz = ALIGN(new_size, BITS_PER_LONG) / 8;
+
+	map = vmalloc(sz);
+	if (!map) {
+		pr_err_once("Failed to allocate encrypted bitmap size %lx\n",
+				sz);
+		return -ENOMEM;
+	}
+
+	/* mark the page encrypted (by default) */
+	memset(map, 0xff, sz);
+
+	bitmap_copy(map, sev->page_enc_bmap, sev->page_enc_bmap_size);
+	kvfree(sev->page_enc_bmap);
+
+	sev->page_enc_bmap = map;
+	sev->page_enc_bmap_size = new_size;
+
+	return 0;
+}
+
+int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
+				  unsigned long npages, unsigned long enc)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	kvm_pfn_t pfn_start, pfn_end;
+	gfn_t gfn_start, gfn_end;
+
+	if (!sev_guest(kvm))
+		return -EINVAL;
+
+	if (!npages)
+		return 0;
+
+	gfn_start = gpa_to_gfn(gpa);
+	gfn_end = gfn_start + npages;
+
+	/* out of bound access error check */
+	if (gfn_end <= gfn_start)
+		return -EINVAL;
+
+	/* lets make sure that gpa exist in our memslot */
+	pfn_start = gfn_to_pfn(kvm, gfn_start);
+	pfn_end = gfn_to_pfn(kvm, gfn_end);
+
+	if (is_error_noslot_pfn(pfn_start) && !is_noslot_pfn(pfn_start)) {
+		/*
+		 * Allow guest MMIO range(s) to be added
+		 * to the page encryption bitmap.
+		 */
+		return -EINVAL;
+	}
+
+	if (is_error_noslot_pfn(pfn_end) && !is_noslot_pfn(pfn_end)) {
+		/*
+		 * Allow guest MMIO range(s) to be added
+		 * to the page encryption bitmap.
+		 */
+		return -EINVAL;
+	}
+
+	mutex_lock(&kvm->lock);
+
+	if (sev->page_enc_bmap_size < gfn_end)
+		goto unlock;
+
+	if (enc)
+		__bitmap_set(sev->page_enc_bmap, gfn_start,
+				gfn_end - gfn_start);
+	else
+		__bitmap_clear(sev->page_enc_bmap, gfn_start,
+				gfn_end - gfn_start);
+
+unlock:
+	mutex_unlock(&kvm->lock);
+	return 0;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1560,6 +1647,9 @@ void sev_vm_destroy(struct kvm *kvm)
 
 	sev_unbind_asid(kvm, sev->handle);
 	sev_asid_free(sev->asid);
+
+	kvfree(sev->page_enc_bmap);
+	sev->page_enc_bmap = NULL;
 }
 
 int __init sev_hardware_setup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2f379bacbb26..1013ef0f4ce2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4014,6 +4014,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
 
 	.check_nested_events = svm_check_nested_events,
+
+	.page_enc_status_hc = svm_page_enc_status_hc,
 };
 
 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index df3474f4fb02..6a562f5928a2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -65,6 +65,8 @@ struct kvm_sev_info {
 	int fd;			/* SEV device fd */
 	unsigned long pages_locked; /* Number of pages locked */
 	struct list_head regions_list;  /* List of registered regions */
+	unsigned long *page_enc_bmap;
+	unsigned long page_enc_bmap_size;
 };
 
 struct kvm_svm {
@@ -400,6 +402,8 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 			       bool has_error_code, u32 error_code);
 int svm_check_nested_events(struct kvm_vcpu *vcpu);
 int nested_svm_exit_special(struct vcpu_svm *svm);
+int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
+				  unsigned long npages, unsigned long enc);
 
 /* avic.c */
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c2c6335a998c..7d01d3aa6461 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7838,6 +7838,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.nested_get_evmcs_version = NULL,
 	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
 	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
+	.page_enc_status_hc = NULL,
 };
 
 static __init int hardware_setup(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c5835f9cb9ad..5f5ddb5765e2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7605,6 +7605,12 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		kvm_sched_yield(vcpu->kvm, a0);
 		ret = 0;
 		break;
+	case KVM_HC_PAGE_ENC_STATUS:
+		ret = -KVM_ENOSYS;
+		if (kvm_x86_ops.page_enc_status_hc)
+			ret = kvm_x86_ops.page_enc_status_hc(vcpu->kvm,
+					a0, a1, a2);
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 8b86609849b9..847b83b75dc8 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -29,6 +29,7 @@
 #define KVM_HC_CLOCK_PAIRING		9
 #define KVM_HC_SEND_IPI		10
 #define KVM_HC_SCHED_YIELD		11
+#define KVM_HC_PAGE_ENC_STATUS		12
 
 /*
  * hypercalls use architecture specific
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (7 preceding siblings ...)
  2020-05-05 21:17 ` [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
@ 2020-05-05 21:17 ` Ashish Kalra
  2020-05-30  2:05   ` Steve Rutherford
  2020-05-05 21:17 ` [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:17 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The ioctl can be used to retrieve page encryption bitmap for a given
gfn range.

Return the correct bitmap as per the number of pages being requested
by the user. Ensure that we copy only bmap->num_pages bits to the
userspace buffer; if bmap->num_pages is not byte-aligned, we read the
trailing bits of the last byte from userspace and copy them back
as-is, so that bits beyond the requested range are left untouched.
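
As an illustration, a VMM could fetch the bitmap along these lines (a
minimal userspace sketch only; vm_fd and num_pages are assumed to come
from the VMM's own setup, the ioctl and structure definitions from the
uapi headers added by this series, and error reporting is elided):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	void *get_page_enc_bitmap(int vm_fd, __u64 num_pages)
	{
		struct kvm_page_enc_bitmap bmap = {
			.start_gfn = 0,
			.num_pages = num_pages,
		};

		/* one bit per page, rounded up to whole bytes */
		bmap.enc_bitmap = calloc((num_pages + 7) / 8, 1);
		if (!bmap.enc_bitmap)
			return NULL;

		if (ioctl(vm_fd, KVM_GET_PAGE_ENC_BITMAP, &bmap) < 0) {
			free(bmap.enc_bitmap);
			return NULL;
		}

		return bmap.enc_bitmap;
	}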

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 Documentation/virt/kvm/api.rst  | 27 +++++++++++++
 arch/x86/include/asm/kvm_host.h |  2 +
 arch/x86/kvm/svm/sev.c          | 70 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/x86.c              | 12 ++++++
 include/uapi/linux/kvm.h        | 12 ++++++
 7 files changed, 125 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index efbbe570aa9b..ecad84086892 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4636,6 +4636,33 @@ This ioctl resets VCPU registers and control structures according to
 the clear cpu reset definition in the POP. However, the cpu is not put
 into ESA mode. This reset is a superset of the initial reset.
 
+4.125 KVM_GET_PAGE_ENC_BITMAP (vm ioctl)
+---------------------------------------
+
+:Capability: basic
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_page_enc_bitmap (in/out)
+:Returns: 0 on success, -1 on error
+
+/* for KVM_GET_PAGE_ENC_BITMAP */
+struct kvm_page_enc_bitmap {
+	__u64 start_gfn;
+	__u64 num_pages;
+	union {
+		void __user *enc_bitmap; /* one bit per page */
+		__u64 padding2;
+	};
+};
+
+The encrypted VMs have the concept of private and shared pages. The private
+pages are encrypted with the guest-specific key, while the shared pages may
+be encrypted with the hypervisor key. The KVM_GET_PAGE_ENC_BITMAP can
+be used to get the bitmap indicating whether the guest page is private
+or shared. The bitmap can be used during the guest migration. If the page
+is private then the userspace needs to use SEV migration commands to transmit
+the page.
+
 
 4.125 KVM_S390_PV_COMMAND
 -------------------------
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a8ee22f4f5b..9e428befb6a4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1256,6 +1256,8 @@ struct kvm_x86_ops {
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
 	int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
 				  unsigned long sz, unsigned long mode);
+	int (*get_page_enc_bitmap)(struct kvm *kvm,
+				struct kvm_page_enc_bitmap *bmap);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f088467708f0..387045902470 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1434,6 +1434,76 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 	return 0;
 }
 
+int svm_get_page_enc_bitmap(struct kvm *kvm,
+				   struct kvm_page_enc_bitmap *bmap)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	unsigned long gfn_start, gfn_end;
+	unsigned long sz, i, sz_bytes;
+	unsigned long *bitmap;
+	int ret, n;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	gfn_start = bmap->start_gfn;
+	gfn_end = gfn_start + bmap->num_pages;
+
+	sz = ALIGN(bmap->num_pages, BITS_PER_LONG) / BITS_PER_BYTE;
+	bitmap = kmalloc(sz, GFP_KERNEL);
+	if (!bitmap)
+		return -ENOMEM;
+
+	/* by default all pages are marked encrypted */
+	memset(bitmap, 0xff, sz);
+
+	mutex_lock(&kvm->lock);
+	if (sev->page_enc_bmap) {
+		i = gfn_start;
+		for_each_clear_bit_from(i, sev->page_enc_bmap,
+				      min(sev->page_enc_bmap_size, gfn_end))
+			clear_bit(i - gfn_start, bitmap);
+	}
+	mutex_unlock(&kvm->lock);
+
+	ret = -EFAULT;
+
+	n = bmap->num_pages % BITS_PER_BYTE;
+	sz_bytes = ALIGN(bmap->num_pages, BITS_PER_BYTE) / BITS_PER_BYTE;
+
+	/*
+	 * Return the correct bitmap as per the number of pages being
+	 * requested by the user. Ensure that we copy only
+	 * bmap->num_pages bits to the userspace buffer; if
+	 * bmap->num_pages is not byte-aligned, we read the trailing
+	 * bits from userspace and copy those bits back as-is.
+	 */
+
+	if (n) {
+		unsigned char *bitmap_kernel = (unsigned char *)bitmap;
+		unsigned char bitmap_user;
+		unsigned long offset, mask;
+
+		offset = bmap->num_pages / BITS_PER_BYTE;
+		if (copy_from_user(&bitmap_user, bmap->enc_bitmap + offset,
+				sizeof(unsigned char)))
+			goto out;
+
+		mask = GENMASK(n - 1, 0);
+		bitmap_user &= ~mask;
+		bitmap_kernel[offset] &= mask;
+		bitmap_kernel[offset] |= bitmap_user;
+	}
+
+	if (copy_to_user(bmap->enc_bitmap, bitmap, sz_bytes))
+		goto out;
+
+	ret = 0;
+out:
+	kfree(bitmap);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1013ef0f4ce2..588709a9f68e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4016,6 +4016,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_nested_events = svm_check_nested_events,
 
 	.page_enc_status_hc = svm_page_enc_status_hc,
+	.get_page_enc_bitmap = svm_get_page_enc_bitmap,
 };
 
 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6a562f5928a2..f087fa7b380c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -404,6 +404,7 @@ int svm_check_nested_events(struct kvm_vcpu *vcpu);
 int nested_svm_exit_special(struct vcpu_svm *svm);
 int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 				  unsigned long npages, unsigned long enc);
+int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
 
 /* avic.c */
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5f5ddb5765e2..937797cfaf9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5208,6 +5208,18 @@ long kvm_arch_vm_ioctl(struct file *filp,
 	case KVM_SET_PMU_EVENT_FILTER:
 		r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
 		break;
+	case KVM_GET_PAGE_ENC_BITMAP: {
+		struct kvm_page_enc_bitmap bitmap;
+
+		r = -EFAULT;
+		if (copy_from_user(&bitmap, argp, sizeof(bitmap)))
+			goto out;
+
+		r = -ENOTTY;
+		if (kvm_x86_ops.get_page_enc_bitmap)
+			r = kvm_x86_ops.get_page_enc_bitmap(kvm, &bitmap);
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 0fe1d206d750..af62f2afaa5d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -505,6 +505,16 @@ struct kvm_dirty_log {
 	};
 };
 
+/* for KVM_GET_PAGE_ENC_BITMAP */
+struct kvm_page_enc_bitmap {
+	__u64 start_gfn;
+	__u64 num_pages;
+	union {
+		void __user *enc_bitmap; /* one bit per page */
+		__u64 padding2;
+	};
+};
+
 /* for KVM_CLEAR_DIRTY_LOG */
 struct kvm_clear_dirty_log {
 	__u32 slot;
@@ -1518,6 +1528,8 @@ struct kvm_pv_cmd {
 /* Available with KVM_CAP_S390_PROTECTED */
 #define KVM_S390_PV_COMMAND		_IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
 
+#define KVM_GET_PAGE_ENC_BITMAP	_IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (8 preceding siblings ...)
  2020-05-05 21:17 ` [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl Ashish Kalra
@ 2020-05-05 21:17 ` Ashish Kalra
  2020-05-30  2:06   ` Steve Rutherford
  2020-05-05 21:18 ` [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl Ashish Kalra
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:17 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

Invoke a hypercall when a memory region is changed from encrypted ->
decrypted and vice versa. Hypervisor needs to know the page encryption
status during the guest migration.
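
For reference, the hypercall takes the guest physical address of the
range, the number of pages, and the new encryption state, matching the
handler introduced in patch 08 of this series:

	/*
	 * a0 = guest physical address of the start of the range
	 * a1 = number of pages in the range
	 * a2 = 1 if the range is now encrypted, 0 if decrypted
	 */
	kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
			   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);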

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/paravirt.h       | 10 +++++
 arch/x86/include/asm/paravirt_types.h |  2 +
 arch/x86/kernel/paravirt.c            |  1 +
 arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
 arch/x86/mm/pat/set_memory.c          |  7 ++++
 5 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 694d8daf4983..8127b9c141bf 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -78,6 +78,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 	PVOP_VCALL1(mmu.exit_mmap, mm);
 }
 
+static inline void page_encryption_changed(unsigned long vaddr, int npages,
+						bool enc)
+{
+	PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
+}
+
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
@@ -946,6 +952,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
 static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 {
 }
+
+static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
+{
+}
 #endif
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_PARAVIRT_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 732f62e04ddb..03bfd515c59c 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -215,6 +215,8 @@ struct pv_mmu_ops {
 
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
+	void (*page_encryption_changed)(unsigned long vaddr, int npages,
+					bool enc);
 
 #ifdef CONFIG_PARAVIRT_XXL
 	struct paravirt_callee_save read_cr2;
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c131ba4e70ef..840c02b23aeb 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -367,6 +367,7 @@ struct paravirt_patch_template pv_ops = {
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
 
 	.mmu.exit_mmap		= paravirt_nop,
+	.mmu.page_encryption_changed	= paravirt_nop,
 
 #ifdef CONFIG_PARAVIRT_XXL
 	.mmu.read_cr2		= __PV_IS_CALLEE_SAVE(native_read_cr2),
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index f4bd4b431ba1..c9800fa811f6 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -19,6 +19,7 @@
 #include <linux/kernel.h>
 #include <linux/bitops.h>
 #include <linux/dma-mapping.h>
+#include <linux/kvm_para.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -29,6 +30,7 @@
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
 #include <asm/cmdline.h>
+#include <asm/kvm_para.h>
 
 #include "mm_internal.h"
 
@@ -196,6 +198,47 @@ void __init sme_early_init(void)
 		swiotlb_force = SWIOTLB_FORCE;
 }
 
+static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
+					bool enc)
+{
+	unsigned long sz = npages << PAGE_SHIFT;
+	unsigned long vaddr_end, vaddr_next;
+
+	vaddr_end = vaddr + sz;
+
+	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
+		int psize, pmask, level;
+		unsigned long pfn;
+		pte_t *kpte;
+
+		kpte = lookup_address(vaddr, &level);
+		if (!kpte || pte_none(*kpte))
+			return;
+
+		switch (level) {
+		case PG_LEVEL_4K:
+			pfn = pte_pfn(*kpte);
+			break;
+		case PG_LEVEL_2M:
+			pfn = pmd_pfn(*(pmd_t *)kpte);
+			break;
+		case PG_LEVEL_1G:
+			pfn = pud_pfn(*(pud_t *)kpte);
+			break;
+		default:
+			return;
+		}
+
+		psize = page_level_size(level);
+		pmask = page_level_mask(level);
+
+		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
+				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
+
+		vaddr_next = (vaddr & pmask) + psize;
+	}
+}
+
 static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 {
 	pgprot_t old_prot, new_prot;
@@ -253,12 +296,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 static int __init early_set_memory_enc_dec(unsigned long vaddr,
 					   unsigned long size, bool enc)
 {
-	unsigned long vaddr_end, vaddr_next;
+	unsigned long vaddr_end, vaddr_next, start;
 	unsigned long psize, pmask;
 	int split_page_size_mask;
 	int level, ret;
 	pte_t *kpte;
 
+	start = vaddr;
 	vaddr_next = vaddr;
 	vaddr_end = vaddr + size;
 
@@ -313,6 +357,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
 	ret = 0;
 
+	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
+					enc);
 out:
 	__flush_tlb_all();
 	return ret;
@@ -451,6 +497,15 @@ void __init mem_encrypt_init(void)
 	if (sev_active())
 		static_branch_enable(&sev_enable_key);
 
+#ifdef CONFIG_PARAVIRT
+	/*
+	 * With SEV, we need to make a hypercall when page encryption state is
+	 * changed.
+	 */
+	if (sev_active())
+		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
+#endif
+
 	pr_info("AMD %s active\n",
 		sev_active() ? "Secure Encrypted Virtualization (SEV)"
 			     : "Secure Memory Encryption (SME)");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 59eca6a94ce7..9aaf1b6f5a1b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -27,6 +27,7 @@
 #include <asm/proto.h>
 #include <asm/memtype.h>
 #include <asm/set_memory.h>
+#include <asm/paravirt.h>
 
 #include "../mm_internal.h"
 
@@ -2003,6 +2004,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, 0);
 
+	/* Notify hypervisor that a given memory range is mapped encrypted
+	 * or decrypted. The hypervisor will use this information during the
+	 * VM migration.
+	 */
+	page_encryption_changed(addr, numpages, enc);
+
 	return ret;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (9 preceding siblings ...)
  2020-05-05 21:17 ` [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
@ 2020-05-05 21:18 ` Ashish Kalra
  2020-05-30  2:06   ` Steve Rutherford
  2020-05-05 21:18 ` [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap Ashish Kalra
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:18 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Brijesh Singh <Brijesh.Singh@amd.com>

The ioctl can be used to set page encryption bitmap for an
incoming guest.
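
On the migration target the flow is the mirror image of
KVM_GET_PAGE_ENC_BITMAP, e.g. (a sketch only; vm_fd, bitmap and
num_pages are assumed to come from the VMM's handling of the
migration stream, and error handling is illustrative):

	struct kvm_page_enc_bitmap bmap = {
		.start_gfn = 0,
		.num_pages = num_pages,
		.enc_bitmap = bitmap,	/* one bit per page */
	};

	if (ioctl(vm_fd, KVM_SET_PAGE_ENC_BITMAP, &bmap) < 0)
		perror("KVM_SET_PAGE_ENC_BITMAP");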

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 Documentation/virt/kvm/api.rst  | 44 +++++++++++++++++++++++++++++
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/sev.c          | 50 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/x86.c              | 12 ++++++++
 include/uapi/linux/kvm.h        |  1 +
 7 files changed, 111 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index ecad84086892..fa70017ee693 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4663,6 +4663,28 @@ or shared. The bitmap can be used during the guest migration. If the page
 is private then the userspace needs to use SEV migration commands to transmit
 the page.
 
+4.126 KVM_SET_PAGE_ENC_BITMAP (vm ioctl)
+---------------------------------------
+
+:Capability: basic
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_page_enc_bitmap (in/out)
+:Returns: 0 on success, -1 on error
+
+/* for KVM_SET_PAGE_ENC_BITMAP */
+struct kvm_page_enc_bitmap {
+	__u64 start_gfn;
+	__u64 num_pages;
+	union {
+		void __user *enc_bitmap; /* one bit per page */
+		__u64 padding2;
+	};
+};
+
+During guest live migration the outgoing guest exports its page encryption
+bitmap, and KVM_SET_PAGE_ENC_BITMAP can be used to build the page encryption
+bitmap for an incoming guest.
 
 4.125 KVM_S390_PV_COMMAND
 -------------------------
@@ -4717,6 +4739,28 @@ KVM_PV_VM_VERIFY
   Verify the integrity of the unpacked image. Only if this succeeds,
   KVM is allowed to start protected VCPUs.
 
+4.126 KVM_SET_PAGE_ENC_BITMAP (vm ioctl)
+---------------------------------------
+
+:Capability: basic
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_page_enc_bitmap (in/out)
+:Returns: 0 on success, -1 on error
+
+/* for KVM_SET_PAGE_ENC_BITMAP */
+struct kvm_page_enc_bitmap {
+	__u64 start_gfn;
+	__u64 num_pages;
+	union {
+		void __user *enc_bitmap; /* one bit per page */
+		__u64 padding2;
+	};
+};
+
+During guest live migration the outgoing guest exports its page encryption
+bitmap, and KVM_SET_PAGE_ENC_BITMAP can be used to build the page encryption
+bitmap for an incoming guest.
 
 5. The kvm_run structure
 ========================
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9e428befb6a4..fc74144d5ab0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1258,6 +1258,8 @@ struct kvm_x86_ops {
 				  unsigned long sz, unsigned long mode);
 	int (*get_page_enc_bitmap)(struct kvm *kvm,
 				struct kvm_page_enc_bitmap *bmap);
+	int (*set_page_enc_bitmap)(struct kvm *kvm,
+				struct kvm_page_enc_bitmap *bmap);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 387045902470..30efc1068707 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1504,6 +1504,56 @@ int svm_get_page_enc_bitmap(struct kvm *kvm,
 	return ret;
 }
 
+int svm_set_page_enc_bitmap(struct kvm *kvm,
+				   struct kvm_page_enc_bitmap *bmap)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	unsigned long gfn_start, gfn_end;
+	unsigned long *bitmap;
+	unsigned long sz;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+	/* special case of resetting the complete bitmap */
+	if (!bmap->enc_bitmap) {
+		mutex_lock(&kvm->lock);
+		/* by default all pages are marked encrypted */
+		if (sev->page_enc_bmap_size)
+			bitmap_fill(sev->page_enc_bmap,
+				    sev->page_enc_bmap_size);
+		mutex_unlock(&kvm->lock);
+		return 0;
+	}
+
+	gfn_start = bmap->start_gfn;
+	gfn_end = gfn_start + bmap->num_pages;
+
+	sz = ALIGN(bmap->num_pages, BITS_PER_LONG) / 8;
+	bitmap = kmalloc(sz, GFP_KERNEL);
+	if (!bitmap)
+		return -ENOMEM;
+
+	ret = -EFAULT;
+	if (copy_from_user(bitmap, bmap->enc_bitmap, sz))
+		goto out;
+
+	mutex_lock(&kvm->lock);
+	ret = sev_resize_page_enc_bitmap(kvm, gfn_end);
+	if (ret)
+		goto unlock;
+
+	bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
+		    (gfn_end - gfn_start));
+
+	ret = 0;
+unlock:
+	mutex_unlock(&kvm->lock);
+out:
+	kfree(bitmap);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 588709a9f68e..501e82f5593c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4017,6 +4017,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.page_enc_status_hc = svm_page_enc_status_hc,
 	.get_page_enc_bitmap = svm_get_page_enc_bitmap,
+	.set_page_enc_bitmap = svm_set_page_enc_bitmap,
 };
 
 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index f087fa7b380c..2ebdcce50312 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -405,6 +405,7 @@ int nested_svm_exit_special(struct vcpu_svm *svm);
 int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 				  unsigned long npages, unsigned long enc);
 int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
+int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
 
 /* avic.c */
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 937797cfaf9a..c4166d7a0493 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5220,6 +5220,18 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			r = kvm_x86_ops.get_page_enc_bitmap(kvm, &bitmap);
 		break;
 	}
+	case KVM_SET_PAGE_ENC_BITMAP: {
+		struct kvm_page_enc_bitmap bitmap;
+
+		r = -EFAULT;
+		if (copy_from_user(&bitmap, argp, sizeof(bitmap)))
+			goto out;
+
+		r = -ENOTTY;
+		if (kvm_x86_ops.set_page_enc_bitmap)
+			r = kvm_x86_ops.set_page_enc_bitmap(kvm, &bitmap);
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index af62f2afaa5d..2798b17484d0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1529,6 +1529,7 @@ struct kvm_pv_cmd {
 #define KVM_S390_PV_COMMAND		_IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
 
 #define KVM_GET_PAGE_ENC_BITMAP	_IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
+#define KVM_SET_PAGE_ENC_BITMAP	_IOW(KVMIO, 0xc7, struct kvm_page_enc_bitmap)
 
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (10 preceding siblings ...)
  2020-05-05 21:18 ` [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl Ashish Kalra
@ 2020-05-05 21:18 ` Ashish Kalra
  2020-05-30  2:07   ` Steve Rutherford
  2020-12-04 11:08   ` Paolo Bonzini
  2020-05-05 21:19 ` [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
                   ` (6 subsequent siblings)
  18 siblings, 2 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:18 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Add support for static allocation of the unified page encryption bitmap by
extending the kvm_arch_commit_memory_region() callback with an SVM-specific
x86_op which reads the userspace-provided memory regions/memslots,
calculates the amount of guest RAM managed by KVM, i.e. the highest guest
PA that is mapped by a memslot, and grows the bitmap accordingly.
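
For context, sev_resize_page_enc_bitmap() (introduced earlier in this
series) conceptually grows the bitmap as sketched below; this is an
outline based on the structures used in this series, not the exact
implementation:

	unsigned long *map;
	unsigned long sz;

	if (sev->page_enc_bmap_size >= new_size)
		return 0;

	sz = ALIGN(new_size, BITS_PER_LONG) / 8;
	map = kvmalloc(sz, GFP_KERNEL);
	if (!map)
		return -ENOMEM;

	/* pages are considered encrypted by default */
	memset(map, 0xff, sz);
	if (sev->page_enc_bmap) {
		bitmap_copy(map, sev->page_enc_bmap,
			    sev->page_enc_bmap_size);
		kvfree(sev->page_enc_bmap);
	}
	sev->page_enc_bmap = map;
	sev->page_enc_bmap_size = new_size;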

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/sev.c          | 35 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/x86.c              |  5 +++++
 5 files changed, 43 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fc74144d5ab0..b573ea85b57e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1254,6 +1254,7 @@ struct kvm_x86_ops {
 
 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+	void (*commit_memory_region)(struct kvm *kvm, enum kvm_mr_change change);
 	int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
 				  unsigned long sz, unsigned long mode);
 	int (*get_page_enc_bitmap)(struct kvm *kvm,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 30efc1068707..c0d7043a0627 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1377,6 +1377,41 @@ static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
 	return 0;
 }
 
+void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	gfn_t start, end = 0;
+
+	spin_lock(&kvm->mmu_lock);
+	if (change == KVM_MR_CREATE) {
+		slots = kvm_memslots(kvm);
+		kvm_for_each_memslot(memslot, slots) {
+			start = memslot->base_gfn;
+			end = memslot->base_gfn + memslot->npages;
+			/*
+			 * The KVM memslot list is sorted, starting with
+			 * the highest mapped guest PA, so pick the topmost
+			 * valid guest PA.
+			 */
+			if (memslot->npages)
+				break;
+		}
+	}
+	spin_unlock(&kvm->mmu_lock);
+
+	if (end) {
+		/*
+		 * NOTE: This callback is invoked in vm ioctl
+		 * set_user_memory_region, hence we can use a
+		 * mutex here.
+		 */
+		mutex_lock(&kvm->lock);
+		sev_resize_page_enc_bitmap(kvm, end);
+		mutex_unlock(&kvm->lock);
+	}
+}
+
 int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 				  unsigned long npages, unsigned long enc)
 {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 501e82f5593c..442adbbb0641 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4015,6 +4015,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.check_nested_events = svm_check_nested_events,
 
+	.commit_memory_region = svm_commit_memory_region,
 	.page_enc_status_hc = svm_page_enc_status_hc,
 	.get_page_enc_bitmap = svm_get_page_enc_bitmap,
 	.set_page_enc_bitmap = svm_set_page_enc_bitmap,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2ebdcce50312..fd99e0a5417a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -406,6 +406,7 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 				  unsigned long npages, unsigned long enc);
 int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
 int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
+void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change);
 
 /* avic.c */
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c4166d7a0493..8938de868d42 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10133,6 +10133,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		kvm_mmu_change_mmu_pages(kvm,
 				kvm_mmu_calculate_default_mmu_pages(kvm));
 
+	if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
+		if (kvm_x86_ops.commit_memory_region)
+			kvm_x86_ops.commit_memory_region(kvm, change);
+	}
+
 	/*
 	 * Dirty logging tracks sptes in 4k granularity, meaning that large
 	 * sptes have to be split.  If live migration is successful, the guest
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (11 preceding siblings ...)
  2020-05-05 21:18 ` [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap Ashish Kalra
@ 2020-05-05 21:19 ` Ashish Kalra
  2020-05-30  2:07   ` Steve Rutherford
  2020-12-04 11:20   ` Paolo Bonzini
  2020-05-05 21:20 ` [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
                   ` (5 subsequent siblings)
  18 siblings, 2 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:19 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Add a new KVM_FEATURE_SEV_LIVE_MIGRATION feature for the guest to check
for host-side support for SEV live migration. Also add a new custom
MSR_KVM_SEV_LIVE_MIG_EN for the guest to enable the SEV live migration
feature.
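
From the guest's point of view the handshake is a CPUID check followed
by a write to the new MSR, mirroring the guest-side code added later in
this series:

	if (kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
		wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN,
		       KVM_SEV_LIVE_MIGRATION_ENABLED);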

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 Documentation/virt/kvm/cpuid.rst     |  5 +++++
 Documentation/virt/kvm/msr.rst       | 10 ++++++++++
 arch/x86/include/uapi/asm/kvm_para.h |  5 +++++
 arch/x86/kvm/svm/sev.c               | 14 ++++++++++++++
 arch/x86/kvm/svm/svm.c               | 16 ++++++++++++++++
 arch/x86/kvm/svm/svm.h               |  2 ++
 6 files changed, 52 insertions(+)

diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
index 01b081f6e7ea..0514523e00cd 100644
--- a/Documentation/virt/kvm/cpuid.rst
+++ b/Documentation/virt/kvm/cpuid.rst
@@ -86,6 +86,11 @@ KVM_FEATURE_PV_SCHED_YIELD        13          guest checks this feature bit
                                               before using paravirtualized
                                               sched yield.
 
+KVM_FEATURE_SEV_LIVE_MIGRATION    14          guest checks this feature bit before
+                                              using the page encryption state
+                                              hypercall to notify the page state
+                                              change
+
 KVM_FEATURE_CLOCSOURCE_STABLE_BIT 24          host will warn if no guest-side
                                               per-cpu warps are expeced in
                                               kvmclock
diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
index 33892036672d..7cd7786bbb03 100644
--- a/Documentation/virt/kvm/msr.rst
+++ b/Documentation/virt/kvm/msr.rst
@@ -319,3 +319,13 @@ data:
 
 	KVM guests can request the host not to poll on HLT, for example if
 	they are performing polling themselves.
+
+MSR_KVM_SEV_LIVE_MIG_EN:
+        0x4b564d06
+
+	Control SEV Live Migration features.
+
+data:
+        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature.
+        Bit 1 enables (1) or disables (0) support for SEV Live Migration extensions.
+        All other bits are reserved.
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 2a8e0b6b9805..d9d4953b42ad 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -31,6 +31,7 @@
 #define KVM_FEATURE_PV_SEND_IPI	11
 #define KVM_FEATURE_POLL_CONTROL	12
 #define KVM_FEATURE_PV_SCHED_YIELD	13
+#define KVM_FEATURE_SEV_LIVE_MIGRATION	14
 
 #define KVM_HINTS_REALTIME      0
 
@@ -50,6 +51,7 @@
 #define MSR_KVM_STEAL_TIME  0x4b564d03
 #define MSR_KVM_PV_EOI_EN      0x4b564d04
 #define MSR_KVM_POLL_CONTROL	0x4b564d05
+#define MSR_KVM_SEV_LIVE_MIG_EN	0x4b564d06
 
 struct kvm_steal_time {
 	__u64 steal;
@@ -122,4 +124,7 @@ struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#define KVM_SEV_LIVE_MIGRATION_ENABLED			(1 << 0)
+#define KVM_SEV_LIVE_MIGRATION_EXTENSIONS_SUPPORTED	(1 << 1)
+
 #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c0d7043a0627..6f69c3a47583 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1469,6 +1469,17 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 	return 0;
 }
 
+void sev_update_migration_flags(struct kvm *kvm, u64 data)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	if (!sev_guest(kvm))
+		return;
+
+	if (data & KVM_SEV_LIVE_MIGRATION_ENABLED)
+		sev->live_migration_enabled = true;
+}
+
 int svm_get_page_enc_bitmap(struct kvm *kvm,
 				   struct kvm_page_enc_bitmap *bmap)
 {
@@ -1481,6 +1492,9 @@ int svm_get_page_enc_bitmap(struct kvm *kvm,
 	if (!sev_guest(kvm))
 		return -ENOTTY;
 
+	if (!sev->live_migration_enabled)
+		return -EINVAL;
+
 	gfn_start = bmap->start_gfn;
 	gfn_end = gfn_start + bmap->num_pages;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 442adbbb0641..a99f5457f244 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2633,6 +2633,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		svm->msr_decfg = data;
 		break;
 	}
+	case MSR_KVM_SEV_LIVE_MIG_EN:
+		sev_update_migration_flags(vcpu->kvm, data);
+		break;
 	case MSR_IA32_APICBASE:
 		if (kvm_vcpu_apicv_active(vcpu))
 			avic_update_vapic_bar(to_svm(vcpu), data);
@@ -3493,6 +3496,19 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
 			     guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
 
+	/*
+	 * If this is an SEV guest, advertise the live migration feature.
+	 */
+	if (sev_guest(vcpu->kvm)) {
+		struct kvm_cpuid_entry2 *best;
+
+		best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
+		if (!best)
+			return;
+
+		best->eax |= (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);
+	}
+
 	if (!kvm_vcpu_apicv_active(vcpu))
 		return;
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index fd99e0a5417a..77f132a6fead 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -65,6 +65,7 @@ struct kvm_sev_info {
 	int fd;			/* SEV device fd */
 	unsigned long pages_locked; /* Number of pages locked */
 	struct list_head regions_list;  /* List of registered regions */
+	bool live_migration_enabled;
 	unsigned long *page_enc_bmap;
 	unsigned long page_enc_bmap_size;
 };
@@ -494,5 +495,6 @@ int svm_unregister_enc_region(struct kvm *kvm,
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 int __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
+void sev_update_migration_flags(struct kvm *kvm, u64 data);
 
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (12 preceding siblings ...)
  2020-05-05 21:19 ` [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
@ 2020-05-05 21:20 ` Ashish Kalra
  2020-05-30  2:07   ` Steve Rutherford
  2020-05-05 21:20 ` [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
                   ` (4 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:20 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Introduce a new AMD Memory Encryption GUID which is currently
used for defining a new UEFI environment variable which indicates
UEFI/OVMF support for the SEV live migration feature. This variable
is set up when UEFI/OVMF detects host/hypervisor support for SEV
live migration, and later the variable is read by the kernel using
EFI runtime services to verify whether OVMF supports the live
migration feature.
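
The kernel-side consumer (added later in this series) reads this
variable via EFI runtime services, roughly as follows:

	efi_char16_t name[] = L"SevLiveMigrationEnabled";
	efi_guid_t guid = MEM_ENCRYPT_GUID;
	efi_status_t status;
	bool enabled;
	unsigned long size = sizeof(enabled);

	status = efi.get_variable(name, &guid, NULL, &size, &enabled);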

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 include/linux/efi.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/efi.h b/include/linux/efi.h
index 251f1f783cdf..2efb42ccf3a8 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -358,6 +358,7 @@ void efi_native_runtime_setup(void);
 
 /* OEM GUIDs */
 #define DELLEMC_EFI_RCI2_TABLE_GUID		EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
+#define MEM_ENCRYPT_GUID			EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
 
 typedef struct {
 	efi_guid_t guid;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (13 preceding siblings ...)
  2020-05-05 21:20 ` [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
@ 2020-05-05 21:20 ` Ashish Kalra
  2020-05-30  2:08   ` Steve Rutherford
  2020-05-05 21:20 ` [PATCH v8 16/18] KVM: x86: Mark _bss_decrypted section variables as decrypted in page encryption bitmap Ashish Kalra
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:20 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

The guest support for detecting and enabling the SEV live migration
feature uses the following logic:

 - kvm_init_platform() checks if the guest booted under EFI

   - If not EFI,

     i) check the KVM_FEATURE_SEV_LIVE_MIGRATION CPUID bit

     ii) if CPUID reports that migration is supported, then write the
         custom MSR to enable SEV live migration support

   - If EFI,

     i) check the KVM_FEATURE_SEV_LIVE_MIGRATION CPUID bit

     ii) if CPUID reports that migration is supported, then read the
         UEFI environment variable which indicates OVMF support for
         live migration

     iii) if the variable is set, then write the custom MSR to enable
          SEV live migration support

The EFI live migration check is done using a late_initcall() callback.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h | 11 ++++++
 arch/x86/kernel/kvm.c              | 62 ++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c          | 11 ++++++
 3 files changed, 84 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 848ce43b9040..d10e92ae5ca1 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,6 +20,7 @@
 
 extern u64 sme_me_mask;
 extern bool sev_enabled;
+extern bool sev_live_mig_enabled;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -42,6 +43,8 @@ void __init sme_enable(struct boot_params *bp);
 
 int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
 int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+					    bool enc);
 
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
@@ -55,6 +58,7 @@ bool sev_active(void);
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0ULL
+#define sev_live_mig_enabled	false
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
@@ -76,6 +80,8 @@ static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
 static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
+static inline void __init
+early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
 
 #define __bss_decrypted
 
@@ -102,6 +108,11 @@ static inline u64 sme_get_me_mask(void)
 	return sme_me_mask;
 }
 
+static inline bool sev_live_migration_enabled(void)
+{
+	return sev_live_mig_enabled;
+}
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6efe0410fb72..4b29815de873 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -24,6 +24,7 @@
 #include <linux/debugfs.h>
 #include <linux/nmi.h>
 #include <linux/swait.h>
+#include <linux/efi.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -403,6 +404,53 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
 	early_set_memory_decrypted((unsigned long) ptr, size);
 }
 
+#ifdef CONFIG_EFI
+static bool setup_kvm_sev_migration(void)
+{
+	efi_char16_t efi_Sev_Live_Mig_support_name[] = L"SevLiveMigrationEnabled";
+	efi_guid_t efi_variable_guid = MEM_ENCRYPT_GUID;
+	efi_status_t status;
+	unsigned long size;
+	bool enabled;
+
+	if (!sev_live_migration_enabled())
+		return false;
+
+	size = sizeof(enabled);
+
+	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+		pr_info("setup_kvm_sev_migration: no efi\n");
+		return false;
+	}
+
+	/* Get variable contents into buffer */
+	status = efi.get_variable(efi_Sev_Live_Mig_support_name,
+				  &efi_variable_guid, NULL, &size, &enabled);
+
+	if (status == EFI_NOT_FOUND) {
+		pr_info("setup_kvm_sev_migration: variable not found\n");
+		return false;
+	}
+
+	if (status != EFI_SUCCESS) {
+		pr_info("setup_kvm_sev_migration: get_variable fail\n");
+		return false;
+	}
+
+	if (enabled == 0) {
+		pr_info("setup_kvm_sev_migration: live migration disabled in OVMF\n");
+		return false;
+	}
+
+	pr_info("setup_kvm_sev_migration: live migration enabled in OVMF\n");
+	wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN, KVM_SEV_LIVE_MIGRATION_ENABLED);
+
+	return true;
+}
+
+late_initcall(setup_kvm_sev_migration);
+#endif
+
 /*
  * Iterate through all possible CPUs and map the memory region pointed
  * by apf_reason, steal_time and kvm_apic_eoi as decrypted at once.
@@ -725,6 +773,20 @@ static void __init kvm_apic_init(void)
 
 static void __init kvm_init_platform(void)
 {
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	if (sev_active() &&
+	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
+		printk(KERN_INFO "KVM: SEV live migration enabled\n");
+		sev_live_mig_enabled = true;
+		/*
+		 * If not booted using EFI, enable Live migration support.
+		 */
+		if (!efi_enabled(EFI_BOOT))
+			wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN,
+			       KVM_SEV_LIVE_MIGRATION_ENABLED);
+	} else
+		printk(KERN_INFO "KVM: SEV live migration not supported\n");
+#endif
 	kvmclock_init();
 	x86_platform.apic_post_init = kvm_apic_init;
 }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c9800fa811f6..f54be71bc75f 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -46,6 +46,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
 
 bool sev_enabled __section(.data);
 
+bool sev_live_mig_enabled __section(.data);
+
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
@@ -204,6 +206,9 @@ static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
 	unsigned long sz = npages << PAGE_SHIFT;
 	unsigned long vaddr_end, vaddr_next;
 
+	if (!sev_live_migration_enabled())
+		return;
+
 	vaddr_end = vaddr + sz;
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
@@ -374,6 +379,12 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 	return early_set_memory_enc_dec(vaddr, size, true);
 }
 
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+					bool enc)
+{
+	set_memory_enc_dec_hypercall(vaddr, npages, enc);
+}
+
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 16/18] KVM: x86: Mark _bss_decrypted section variables as decrypted in page encryption bitmap.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (14 preceding siblings ...)
  2020-05-05 21:20 ` [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
@ 2020-05-05 21:20 ` Ashish Kalra
  2020-05-30  2:08   ` Steve Rutherford
  2020-05-05 21:21   ` Ashish Kalra
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:20 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Ensure that _bss_decrypted section variables such as hv_clock_boot and
wall_clock are marked as decrypted in the page encryption bitmap if
SEV live migration is supported.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/kernel/kvmclock.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2c..65777bf1218d 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -334,6 +334,18 @@ void __init kvmclock_init(void)
 	pr_info("kvm-clock: Using msrs %x and %x",
 		msr_kvm_system_time, msr_kvm_wall_clock);
 
+	if (sev_live_migration_enabled()) {
+		unsigned long nr_pages;
+		/*
+		 * sizeof(hv_clock_boot) is already PAGE_SIZE aligned
+		 */
+		early_set_mem_enc_dec_hypercall((unsigned long)hv_clock_boot,
+						1, 0);
+		nr_pages = DIV_ROUND_UP(sizeof(wall_clock), PAGE_SIZE);
+		early_set_mem_enc_dec_hypercall((unsigned long)&wall_clock,
+						nr_pages, 0);
+	}
+
 	this_cpu_write(hv_clock_per_cpu, &hv_clock_boot[0]);
 	kvm_register_clock("primary cpu clock");
 	pvclock_set_pvti_cpu0_va(hv_clock_boot);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 17/18] KVM: x86: Add kexec support for SEV Live Migration.
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
@ 2020-05-05 21:21   ` Ashish Kalra
  2020-05-05 21:14 ` [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
                     ` (17 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:21 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh, kexec

From: Ashish Kalra <ashish.kalra@amd.com>

Reset the host's page encryption bitmap related to kernel
specific page encryption status settings before we load a
new kernel by kexec. We cannot reset the complete
page encryption bitmap here as we need to retain the
UEFI/OVMF firmware specific settings.

The host's page encryption bitmap is maintained for the
guest to keep the encrypted/decrypted state of the guest pages,
therefore we need to explicitly mark all shared pages as
encrypted again before rebooting into the new guest kernel.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/kernel/kvm.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 4b29815de873..a8bc30d5b15b 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -35,6 +35,7 @@
 #include <asm/hypervisor.h>
 #include <asm/tlb.h>
 #include <asm/cpuidle_haltpoll.h>
+#include <asm/e820/api.h>
 
 static int kvmapf = 1;
 
@@ -358,6 +359,33 @@ static void kvm_pv_guest_cpu_reboot(void *unused)
 	 */
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
 		wrmsrl(MSR_KVM_PV_EOI_EN, 0);
+	/*
+	 * Reset the host's page encryption bitmap related to kernel
+	 * specific page encryption status settings before we load a
+	 * new kernel by kexec. NOTE: We cannot reset the complete
+	 * page encryption bitmap here as we need to retain the
+	 * UEFI/OVMF firmware specific settings.
+	 */
+	if (sev_live_migration_enabled() && (smp_processor_id() == 0)) {
+		int i;
+		unsigned long nr_pages;
+
+		for (i = 0; i < e820_table->nr_entries; i++) {
+			struct e820_entry *entry = &e820_table->entries[i];
+			unsigned long start_pfn;
+			unsigned long end_pfn;
+
+			if (entry->type != E820_TYPE_RAM)
+				continue;
+
+			start_pfn = entry->addr >> PAGE_SHIFT;
+			end_pfn = (entry->addr + entry->size) >> PAGE_SHIFT;
+			nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
+
+			kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
+					   entry->addr, nr_pages, 1);
+		}
+	}
 	kvm_pv_disable_apf();
 	kvm_disable_steal_time();
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (16 preceding siblings ...)
  2020-05-05 21:21   ` Ashish Kalra
@ 2020-05-05 21:22 ` Ashish Kalra
  2020-05-30  2:09   ` Steve Rutherford
                     ` (2 more replies)
  2020-05-18 19:07 ` [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
  18 siblings, 3 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-05 21:22 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

For source VM, live migration feature is enabled explicitly
when the guest is booting, for the incoming VM(s) it is implied.
This is required for handling A->B->C->... VM migrations case.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/kvm/svm/sev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6f69c3a47583..ba7c0ebfa1f3 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1592,6 +1592,13 @@ int svm_set_page_enc_bitmap(struct kvm *kvm,
 	if (ret)
 		goto unlock;
 
+	/*
+	 * For source VM, live migration feature is enabled
+	 * explicitly when the guest is booting, for the
+	 * incoming VM(s) it is implied.
+	 */
+	sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
+
 	bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
 		    (gfn_end - gfn_start));
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command
  2020-05-05 21:14 ` [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
@ 2020-05-05 22:48   ` Venu Busireddy
  0 siblings, 0 replies; 59+ messages in thread
From: Venu Busireddy @ 2020-05-05 22:48 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, brijesh.singh

On 2020-05-05 21:14:54 +0000, Ashish Kalra wrote:
> From: Brijesh Singh <Brijesh.Singh@amd.com>
> 
> The command is used for encrypting the guest memory region using the encryption
> context created with KVM_SEV_SEND_START.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>

> ---
>  .../virt/kvm/amd-memory-encryption.rst        |  24 ++++
>  arch/x86/kvm/svm/sev.c                        | 135 +++++++++++++++++-
>  include/uapi/linux/kvm.h                      |   9 ++
>  3 files changed, 164 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index 59cb59bd4675..d0dfa5b54e4f 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -290,6 +290,30 @@ Returns: 0 on success, -negative on error
>                  __u32 session_len;
>          };
>  
> +11. KVM_SEV_SEND_UPDATE_DATA
> +----------------------------
> +
> +The KVM_SEV_SEND_UPDATE_DATA command can be used by the hypervisor to encrypt the
> +outgoing guest memory region with the encryption context created using
> +KVM_SEV_SEND_START.
> +
> +Parameters (in): struct kvm_sev_send_update_data
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +
> +        struct kvm_sev_send_update_data {
> +                __u64 hdr_uaddr;        /* userspace address containing the packet header */
> +                __u32 hdr_len;
> +
> +                __u64 guest_uaddr;      /* the source memory region to be encrypted */
> +                __u32 guest_len;
> +
> +                __u64 trans_uaddr;      /* the destination memory region */
> +                __u32 trans_len;
> +        };
> +
>  References
>  ==========
>  
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 5a15b43b4349..7031b660f64d 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -23,6 +23,7 @@ static DECLARE_RWSEM(sev_deactivate_lock);
>  static DEFINE_MUTEX(sev_bitmap_lock);
>  unsigned int max_sev_asid;
>  static unsigned int min_sev_asid;
> +static unsigned long sev_me_mask;
>  static unsigned long *sev_asid_bitmap;
>  static unsigned long *sev_reclaim_asid_bitmap;
>  #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
> @@ -1035,6 +1036,123 @@ static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return ret;
>  }
>  
> +/* Userspace wants to query either header or trans length. */
> +static int
> +__sev_send_update_data_query_lengths(struct kvm *kvm, struct kvm_sev_cmd *argp,
> +				     struct kvm_sev_send_update_data *params)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_send_update_data *data;
> +	int ret;
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
> +	if (!data)
> +		return -ENOMEM;
> +
> +	data->handle = sev->handle;
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
> +
> +	params->hdr_len = data->hdr_len;
> +	params->trans_len = data->trans_len;
> +
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
> +			 sizeof(struct kvm_sev_send_update_data)))
> +		ret = -EFAULT;
> +
> +	kfree(data);
> +	return ret;
> +}
> +
> +static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_send_update_data *data;
> +	struct kvm_sev_send_update_data params;
> +	void *hdr, *trans_data;
> +	struct page **guest_page;
> +	unsigned long n;
> +	int ret, offset;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +			sizeof(struct kvm_sev_send_update_data)))
> +		return -EFAULT;
> +
> +	/* userspace wants to query either header or trans length */
> +	if (!params.trans_len || !params.hdr_len)
> +		return __sev_send_update_data_query_lengths(kvm, argp, &params);
> +
> +	if (!params.trans_uaddr || !params.guest_uaddr ||
> +	    !params.guest_len || !params.hdr_uaddr)
> +		return -EINVAL;
> +
> +	/* Check if we are crossing the page boundary */
> +	offset = params.guest_uaddr & (PAGE_SIZE - 1);
> +	if ((params.guest_len + offset > PAGE_SIZE))
> +		return -EINVAL;
> +
> +	/* Pin guest memory */
> +	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
> +				    PAGE_SIZE, &n, 0);
> +	if (!guest_page)
> +		return -EFAULT;
> +
> +	/* allocate memory for header and transport buffer */
> +	ret = -ENOMEM;
> +	hdr = kmalloc(params.hdr_len, GFP_KERNEL_ACCOUNT);
> +	if (!hdr)
> +		goto e_unpin;
> +
> +	trans_data = kmalloc(params.trans_len, GFP_KERNEL_ACCOUNT);
> +	if (!trans_data)
> +		goto e_free_hdr;
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> +	if (!data)
> +		goto e_free_trans_data;
> +
> +	data->hdr_address = __psp_pa(hdr);
> +	data->hdr_len = params.hdr_len;
> +	data->trans_address = __psp_pa(trans_data);
> +	data->trans_len = params.trans_len;
> +
> +	/* The SEND_UPDATE_DATA command requires C-bit to be always set. */
> +	data->guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) +
> +				offset;
> +	data->guest_address |= sev_me_mask;
> +	data->guest_len = params.guest_len;
> +	data->handle = sev->handle;
> +
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
> +
> +	if (ret)
> +		goto e_free;
> +
> +	/* copy transport buffer to user space */
> +	if (copy_to_user((void __user *)(uintptr_t)params.trans_uaddr,
> +			 trans_data, params.trans_len)) {
> +		ret = -EFAULT;
> +		goto e_free;
> +	}
> +
> +	/* Copy packet header to userspace. */
> +	ret = copy_to_user((void __user *)(uintptr_t)params.hdr_uaddr, hdr,
> +				params.hdr_len);
> +
> +e_free:
> +	kfree(data);
> +e_free_trans_data:
> +	kfree(trans_data);
> +e_free_hdr:
> +	kfree(hdr);
> +e_unpin:
> +	sev_unpin_memory(kvm, guest_page, n);
> +
> +	return ret;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>  	struct kvm_sev_cmd sev_cmd;
> @@ -1082,6 +1200,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  	case KVM_SEV_SEND_START:
>  		r = sev_send_start(kvm, &sev_cmd);
>  		break;
> +	case KVM_SEV_SEND_UPDATE_DATA:
> +		r = sev_send_update_data(kvm, &sev_cmd);
> +		break;
>  	default:
>  		r = -EINVAL;
>  		goto out;
> @@ -1238,16 +1359,22 @@ void sev_vm_destroy(struct kvm *kvm)
>  int __init sev_hardware_setup(void)
>  {
>  	struct sev_user_data_status *status;
> +	u32 eax, ebx;
>  	int rc;
>  
> -	/* Maximum number of encrypted guests supported simultaneously */
> -	max_sev_asid = cpuid_ecx(0x8000001F);
> +	/*
> +	 * Query the memory encryption information.
> +	 *  EBX:  Bit 0:5 Pagetable bit position used to indicate encryption
> +	 *  (aka Cbit).
> +	 *  ECX:  Maximum number of encrypted guests supported simultaneously.
> +	 *  EDX:  Minimum ASID value that should be used for SEV guest.
> +	 */
> +	cpuid(0x8000001f, &eax, &ebx, &max_sev_asid, &min_sev_asid);
>  
>  	if (!svm_sev_enabled())
>  		return 1;
>  
> -	/* Minimum ASID value that should be used for SEV guest */
> -	min_sev_asid = cpuid_edx(0x8000001F);
> +	sev_me_mask = 1UL << (ebx & 0x3f);
>  
>  	/* Initialize SEV ASID bitmaps */
>  	sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 8827d43e2684..7aaed8ee33cf 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1610,6 +1610,15 @@ struct kvm_sev_send_start {
>  	__u32 session_len;
>  };
>  
> +struct kvm_sev_send_update_data {
> +	__u64 hdr_uaddr;
> +	__u32 hdr_len;
> +	__u64 guest_uaddr;
> +	__u32 guest_len;
> +	__u64 trans_uaddr;
> +	__u32 trans_len;
> +};
> +
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
>  #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
> -- 
> 2.17.1
> 
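
As the length-query path above shows, a zero hdr_len or trans_len turns
the ioctl into a size query, so userspace drives SEND_UPDATE_DATA in two
passes. A rough sketch with this series applied (error handling and
buffer reuse elided; sev_fd is the /dev/sev fd userspace already holds):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: encrypt one guest page for the outgoing stream. */
	static int send_update_data(int vm_fd, int sev_fd,
				    __u64 guest_uaddr, __u32 guest_len)
	{
		struct kvm_sev_send_update_data upd = {};
		struct kvm_sev_cmd cmd = {
			.id     = KVM_SEV_SEND_UPDATE_DATA,
			.data   = (unsigned long)&upd,
			.sev_fd = sev_fd,
		};

		/* Pass 1: zero lengths ask the firmware for the sizes. */
		ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
		if (!upd.hdr_len || !upd.trans_len)
			return -1;

		upd.hdr_uaddr   = (unsigned long)malloc(upd.hdr_len);
		upd.trans_uaddr = (unsigned long)malloc(upd.trans_len);
		upd.guest_uaddr = guest_uaddr;	/* must not cross a page */
		upd.guest_len   = guest_len;

		/* Pass 2: the PSP encrypts the page into trans_uaddr. */
		return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	}

The hdr and trans buffers are what actually travel in the migration
stream, together with the gfn so the target knows where the page goes.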

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command
  2020-05-05 21:15 ` [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
@ 2020-05-05 22:51   ` Venu Busireddy
  0 siblings, 0 replies; 59+ messages in thread
From: Venu Busireddy @ 2020-05-05 22:51 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, brijesh.singh

On 2020-05-05 21:15:11 +0000, Ashish Kalra wrote:
> From: Brijesh Singh <Brijesh.Singh@amd.com>
> 
> The command is used to finalize the encryption context created with
> KVM_SEV_SEND_START command.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>

> ---
>  .../virt/kvm/amd-memory-encryption.rst        |  8 +++++++
>  arch/x86/kvm/svm/sev.c                        | 23 +++++++++++++++++++
>  2 files changed, 31 insertions(+)
> 
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index d0dfa5b54e4f..93884ec8918e 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -314,6 +314,14 @@ Returns: 0 on success, -negative on error
>                  __u32 trans_len;
>          };
>  
> +12. KVM_SEV_SEND_FINISH
> +------------------------
> +
> +After completion of the migration flow, the KVM_SEV_SEND_FINISH command can be
> +issued by the hypervisor to delete the encryption context.
> +
> +Returns: 0 on success, -negative on error
> +
>  References
>  ==========
>  
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 7031b660f64d..4d3031c9fdcf 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1153,6 +1153,26 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return ret;
>  }
>  
> +static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_send_finish *data;
> +	int ret;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> +	if (!data)
> +		return -ENOMEM;
> +
> +	data->handle = sev->handle;
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_FINISH, data, &argp->error);
> +
> +	kfree(data);
> +	return ret;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>  	struct kvm_sev_cmd sev_cmd;
> @@ -1203,6 +1223,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  	case KVM_SEV_SEND_UPDATE_DATA:
>  		r = sev_send_update_data(kvm, &sev_cmd);
>  		break;
> +	case KVM_SEV_SEND_FINISH:
> +		r = sev_send_finish(kvm, &sev_cmd);
> +		break;
>  	default:
>  		r = -EINVAL;
>  		goto out;
> -- 
> 2.17.1
> 
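
Once the last SEND_UPDATE_DATA has gone out, the source side only has to
drop the context; a sketch with this series applied (same vm_fd/sev_fd
conventions as the SEND_UPDATE_DATA sketch earlier in this thread):

	/* Sketch: tear down the send context after the last page. */
	static int send_finish(int vm_fd, int sev_fd)
	{
		struct kvm_sev_cmd cmd = {
			.id     = KVM_SEV_SEND_FINISH,
			.sev_fd = sev_fd,	/* no payload needed */
		};

		return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	}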

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  2020-05-05 21:15 ` [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
@ 2020-05-05 22:52   ` Venu Busireddy
  0 siblings, 0 replies; 59+ messages in thread
From: Venu Busireddy @ 2020-05-05 22:52 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, brijesh.singh

On 2020-05-05 21:15:40 +0000, Ashish Kalra wrote:
> From: Brijesh Singh <Brijesh.Singh@amd.com>
> 
> The command is used to create the encryption context for an incoming
> SEV guest. The encryption context can be later used by the hypervisor
> to import the incoming data into the SEV guest memory space.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>

> ---
>  .../virt/kvm/amd-memory-encryption.rst        | 29 +++++++
>  arch/x86/kvm/svm/sev.c                        | 81 +++++++++++++++++++
>  include/uapi/linux/kvm.h                      |  9 +++
>  3 files changed, 119 insertions(+)
> 
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index 93884ec8918e..337bf6a8a3ee 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -322,6 +322,35 @@ issued by the hypervisor to delete the encryption context.
>  
>  Returns: 0 on success, -negative on error
>  
> +13. KVM_SEV_RECEIVE_START
> +-------------------------
> +
> +The KVM_SEV_RECEIVE_START command is used for creating the memory encryption
> +context for an incoming SEV guest. To create the encryption context, the user must
> +provide a guest policy, the platform public Diffie-Hellman (PDH) key and session
> +information.
> +
> +Parameters: struct  kvm_sev_receive_start (in/out)
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +
> +        struct kvm_sev_receive_start {
> +                __u32 handle;           /* if zero then firmware creates a new handle */
> +                __u32 policy;           /* guest's policy */
> +
> +                __u64 pdh_uaddr;        /* userspace address pointing to the PDH key */
> +                __u32 pdh_len;
> +
> +                __u64 session_uaddr;    /* userspace address which points to the guest session information */
> +                __u32 session_len;
> +        };
> +
> +On success, the 'handle' field contains a new handle and on error, a negative value.
> +
> +For more details, see SEV spec Section 6.12.
> +
>  References
>  ==========
>  
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 4d3031c9fdcf..b575aa8e27af 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1173,6 +1173,84 @@ static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return ret;
>  }
>  
> +static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_receive_start *start;
> +	struct kvm_sev_receive_start params;
> +	int *error = &argp->error;
> +	void *session_data;
> +	void *pdh_data;
> +	int ret;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	/* Get parameter from the userspace */
> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +			sizeof(struct kvm_sev_receive_start)))
> +		return -EFAULT;
> +
> +	/* some sanity checks */
> +	if (!params.pdh_uaddr || !params.pdh_len ||
> +	    !params.session_uaddr || !params.session_len)
> +		return -EINVAL;
> +
> +	pdh_data = psp_copy_user_blob(params.pdh_uaddr, params.pdh_len);
> +	if (IS_ERR(pdh_data))
> +		return PTR_ERR(pdh_data);
> +
> +	session_data = psp_copy_user_blob(params.session_uaddr,
> +			params.session_len);
> +	if (IS_ERR(session_data)) {
> +		ret = PTR_ERR(session_data);
> +		goto e_free_pdh;
> +	}
> +
> +	ret = -ENOMEM;
> +	start = kzalloc(sizeof(*start), GFP_KERNEL);
> +	if (!start)
> +		goto e_free_session;
> +
> +	start->handle = params.handle;
> +	start->policy = params.policy;
> +	start->pdh_cert_address = __psp_pa(pdh_data);
> +	start->pdh_cert_len = params.pdh_len;
> +	start->session_address = __psp_pa(session_data);
> +	start->session_len = params.session_len;
> +
> +	/* create memory encryption context */
> +	ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_RECEIVE_START, start,
> +				error);
> +	if (ret)
> +		goto e_free;
> +
> +	/* Bind ASID to this guest */
> +	ret = sev_bind_asid(kvm, start->handle, error);
> +	if (ret)
> +		goto e_free;
> +
> +	params.handle = start->handle;
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data,
> +			 &params, sizeof(struct kvm_sev_receive_start))) {
> +		ret = -EFAULT;
> +		sev_unbind_asid(kvm, start->handle);
> +		goto e_free;
> +	}
> +
> +	sev->handle = start->handle;
> +	sev->fd = argp->sev_fd;
> +
> +e_free:
> +	kfree(start);
> +e_free_session:
> +	kfree(session_data);
> +e_free_pdh:
> +	kfree(pdh_data);
> +
> +	return ret;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>  	struct kvm_sev_cmd sev_cmd;
> @@ -1226,6 +1304,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  	case KVM_SEV_SEND_FINISH:
>  		r = sev_send_finish(kvm, &sev_cmd);
>  		break;
> +	case KVM_SEV_RECEIVE_START:
> +		r = sev_receive_start(kvm, &sev_cmd);
> +		break;
>  	default:
>  		r = -EINVAL;
>  		goto out;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 7aaed8ee33cf..24ac57151d53 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1619,6 +1619,15 @@ struct kvm_sev_send_update_data {
>  	__u32 trans_len;
>  };
>  
> +struct kvm_sev_receive_start {
> +	__u32 handle;
> +	__u32 policy;
> +	__u64 pdh_uaddr;
> +	__u32 pdh_len;
> +	__u64 session_uaddr;
> +	__u32 session_len;
> +};
> +
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
>  #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
> -- 
> 2.17.1
> 
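
On the target the mirror-image call opens the receive flow; a sketch
with this series applied (pdh and session are the blobs the source
produced with KVM_SEV_SEND_START):

	/* Sketch: create the receive context on the migration target. */
	static int receive_start(int vm_fd, int sev_fd, __u32 policy,
				 void *pdh, __u32 pdh_len,
				 void *session, __u32 session_len)
	{
		struct kvm_sev_receive_start start = {
			.handle        = 0,	/* 0: firmware picks a handle */
			.policy        = policy,
			.pdh_uaddr     = (unsigned long)pdh,
			.pdh_len       = pdh_len,
			.session_uaddr = (unsigned long)session,
			.session_len   = session_len,
		};
		struct kvm_sev_cmd cmd = {
			.id     = KVM_SEV_RECEIVE_START,
			.data   = (unsigned long)&start,
			.sev_fd = sev_fd,
		};
		int ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

		/* On success, start.handle carries the new guest handle. */
		return ret;
	}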

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 00/18] Add AMD SEV guest live migration support
  2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
                   ` (17 preceding siblings ...)
  2020-05-05 21:22 ` [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s) Ashish Kalra
@ 2020-05-18 19:07 ` Ashish Kalra
  2020-06-01 20:02   ` Steve Rutherford
  18 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-05-18 19:07 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

Hello All,

Any other feedback, review, or comments on this patch set?

Thanks,
Ashish

On Tue, May 05, 2020 at 09:13:49PM +0000, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> This series adds support for AMD SEV guest live migration commands. To protect the
> confidentiality of an SEV protected guest memory while in transit we need to
> use the SEV commands defined in SEV API spec [1].
> 
> SEV guest VMs have the concept of private and shared memory. Private memory
> is encrypted with the guest-specific key, while shared memory may be encrypted
> with the hypervisor key. The commands provided by the SEV FW are meant to be
> used for the private memory only. The patch series introduces a new hypercall.
> The guest OS can use this hypercall to notify the page encryption status.
> If the page is encrypted with the guest-specific key, then we use the SEV
> commands during migration. If the page is not encrypted, then we fall back to
> the default path.
> 
> The patch adds new ioctls KVM_{SET,GET}_PAGE_ENC_BITMAP. The ioctls can be used
> by qemu to get the page encryption bitmap. Qemu can consult this bitmap
> during migration to know whether a page is encrypted.
> 
> This section describes how the SEV live migration feature is negotiated
> between the host and the guest. The host indicates support for this
> feature via KVM_FEATURE_CPUID. The guest firmware (OVMF) detects the
> feature and sets a UEFI environment variable indicating OVMF support for
> live migration. The guest kernel also detects host support via cpuid;
> in the case of an EFI boot it verifies that OVMF supports the feature
> by reading the UEFI environment variable, and if it is set, enables the
> live migration feature on the host by writing to a custom MSR. If not
> booted under EFI, it simply enables the feature by writing to the custom
> MSR directly. The host returns an error from the SET_PAGE_ENC_BITMAP
> ioctl if the guest has not enabled live migration.
> 
> A branch containing these patches is available here:
> https://github.com/AMDESE/linux/tree/sev-migration-v8
> 
> [1] https://developer.amd.com/wp-content/resources/55766.PDF
> 
> Changes since v7:
> - Removed the hypervisor specific hypercall/paravirt callback for
>   SEV live migration and moved back to calling kvm_sev_hypercall3 
>   directly.
> - Fix build errors as
>   Reported-by: kbuild test robot <lkp@intel.com>, specifically fixed
>   build error when CONFIG_HYPERVISOR_GUEST=y and
>   CONFIG_AMD_MEM_ENCRYPT=n.
> - Implicitly enabled live migration for incoming VM(s) to handle 
>   A->B->C->... VM migrations.
> - Fixed Documentation as per comments on v6 patches.
> - Fixed error return path in sev_send_update_data() as per comments 
>   on v6 patches. 
> 
> Changes since v6:
> - Rebasing to mainline and refactoring to the new split SVM
>   infrastructure.
> - Move to static allocation of the unified Page Encryption bitmap
>   instead of the dynamic resizing of the bitmap, the static allocation
>   is done implicitly by extending kvm_arch_commit_memory_region() callback
>   to add svm specific x86_ops which can read the userspace provided memory
>   region/memslots and calculate the amount of guest RAM managed by the KVM
>   and grow the bitmap.
> - Fixed KVM_SET_PAGE_ENC_BITMAP ioctl to set the whole bitmap instead
>   of simply clearing specific bits.
> - Removed KVM_PAGE_ENC_BITMAP_RESET ioctl, which is now performed using
>   KVM_SET_PAGE_ENC_BITMAP.
> - Extended guest support for enabling Live Migration feature by adding a
>   check for UEFI environment variable indicating OVMF support for Live
>   Migration feature and additionally checking for KVM capability for the
>   same feature. If not booted under EFI, then we simply check for KVM
>   capability.
> - Add hypervisor specific hypercall for SEV live migration by adding
>   a new paravirt callback as part of x86_hyper_runtime.
>   (x86 hypervisor specific runtime callbacks)
> - Moving MSR handling for MSR_KVM_SEV_LIVE_MIG_EN into svm/sev code 
>   and adding check for SEV live migration enabled by guest in the 
>   KVM_GET_PAGE_ENC_BITMAP ioctl.
> - Instead of the complete __bss_decrypted section, only specific variables
>   such as hv_clock_boot and wall_clock are marked as decrypted in the
>   page encryption bitmap
> 
> Changes since v5:
> - Fix build errors as
>   Reported-by: kbuild test robot <lkp@intel.com>
> 
> Changes since v4:
> - Host support has been added to extend KVM capabilities/feature bits to 
>   include a new KVM_FEATURE_SEV_LIVE_MIGRATION, which the guest can
>   query for host-side support for SEV live migration and a new custom MSR
>   MSR_KVM_SEV_LIVE_MIG_EN is added for guest to enable the SEV live
>   migration feature.
> - Ensure that _bss_decrypted section is marked as decrypted in the
>   page encryption bitmap.
> - Fixing KVM_GET_PAGE_ENC_BITMAP ioctl to return the correct bitmap
>   as per the number of pages being requested by the user. Ensure that
>   we only copy bmap->num_pages bytes in the userspace buffer, if
>   bmap->num_pages is not byte aligned we read the trailing bits
>   from the userspace and copy those bits as is. This fixes guest
>   page(s) corruption issues observed after migration completion.
> - Add kexec support for SEV Live Migration to reset the host's
>   page encryption bitmap related to kernel specific page encryption
>   status settings before we load a new kernel by kexec. We cannot
>   reset the complete page encryption bitmap here as we need to
>   retain the UEFI/OVMF firmware specific settings.
> 
> Changes since v3:
> - Rebasing to mainline and testing.
> - Adding a new KVM_PAGE_ENC_BITMAP_RESET ioctl, which resets the 
>   page encryption bitmap on a guest reboot event.
> - Adding a more reliable sanity check for GPA range being passed to
>   the hypercall to ensure that guest MMIO ranges are also marked
>   in the page encryption bitmap.
> 
> Changes since v2:
>  - reset the page encryption bitmap on vcpu reboot
> 
> Changes since v1:
>  - Add support to share the page encryption between the source and target
>    machine.
>  - Fix review feedbacks from Tom Lendacky.
>  - Add check to limit the session blob length.
> - Update KVM_GET_PAGE_ENC_BITMAP ioctl to use the base_gfn instead of
>    the memory slot when querying the bitmap.
> 
> Ashish Kalra (7):
>   KVM: SVM: Add support for static allocation of unified Page Encryption
>     Bitmap.
>   KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature &
>     Custom MSR.
>   EFI: Introduce the new AMD Memory Encryption GUID.
>   KVM: x86: Add guest support for detecting and enabling SEV Live
>     Migration feature.
>   KVM: x86: Mark _bss_decrypted section variables as decrypted in page
>     encryption bitmap.
>   KVM: x86: Add kexec support for SEV Live Migration.
>   KVM: SVM: Enable SEV live migration feature implicitly on Incoming
>     VM(s).
> 
> Brijesh Singh (11):
>   KVM: SVM: Add KVM_SEV SEND_START command
>   KVM: SVM: Add KVM_SEND_UPDATE_DATA command
>   KVM: SVM: Add KVM_SEV_SEND_FINISH command
>   KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
>   KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
>   KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
>   KVM: x86: Add AMD SEV specific Hypercall3
>   KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
>   KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
>   mm: x86: Invoke hypercall when page encryption status is changed
>   KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl
> 
>  .../virt/kvm/amd-memory-encryption.rst        | 120 +++
>  Documentation/virt/kvm/api.rst                |  71 ++
>  Documentation/virt/kvm/cpuid.rst              |   5 +
>  Documentation/virt/kvm/hypercalls.rst         |  15 +
>  Documentation/virt/kvm/msr.rst                |  10 +
>  arch/x86/include/asm/kvm_host.h               |   7 +
>  arch/x86/include/asm/kvm_para.h               |  12 +
>  arch/x86/include/asm/mem_encrypt.h            |  11 +
>  arch/x86/include/asm/paravirt.h               |  10 +
>  arch/x86/include/asm/paravirt_types.h         |   2 +
>  arch/x86/include/uapi/asm/kvm_para.h          |   5 +
>  arch/x86/kernel/kvm.c                         |  90 +++
>  arch/x86/kernel/kvmclock.c                    |  12 +
>  arch/x86/kernel/paravirt.c                    |   1 +
>  arch/x86/kvm/svm/sev.c                        | 732 +++++++++++++++++-
>  arch/x86/kvm/svm/svm.c                        |  21 +
>  arch/x86/kvm/svm/svm.h                        |   9 +
>  arch/x86/kvm/vmx/vmx.c                        |   1 +
>  arch/x86/kvm/x86.c                            |  35 +
>  arch/x86/mm/mem_encrypt.c                     |  68 +-
>  arch/x86/mm/pat/set_memory.c                  |   7 +
>  include/linux/efi.h                           |   1 +
>  include/linux/psp-sev.h                       |   8 +-
>  include/uapi/linux/kvm.h                      |  52 ++
>  include/uapi/linux/kvm_para.h                 |   1 +
>  25 files changed, 1297 insertions(+), 9 deletions(-)
> 
> -- 
> 2.17.1
> 
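
Condensed into pseudo-kernel-code, the negotiation described in the
cover letter amounts to roughly the following on the guest side (a
sketch only; check_ovmf_live_migration_variable() stands in for the
UEFI variable lookup, and the feature/MSR names are the ones this
series introduces):

	/* Sketch of the guest-side live migration negotiation. */
	static void sev_live_migration_init(void)
	{
		if (!sev_active())
			return;

		/* Host advertises support via the KVM CPUID features. */
		if (!kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
			return;

		/* On an EFI boot, OVMF must also opt in via a variable. */
		if (efi_enabled(EFI_BOOT) &&
		    !check_ovmf_live_migration_variable())
			return;

		/* Tell the host we are ready via the custom MSR. */
		wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN,
		       KVM_SEV_LIVE_MIGRATION_ENABLED);
	}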

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  2020-05-05 21:17 ` [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
@ 2020-05-30  2:05   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:05 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:17 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <Brijesh.Singh@amd.com>
>
> This hypercall is used by the SEV guest to notify a change in the page
> encryption status to the hypervisor. The hypercall should be invoked
> only when the encryption attribute is changed from encrypted -> decrypted
> and vice versa. By default all guest pages are considered encrypted.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  Documentation/virt/kvm/hypercalls.rst | 15 +++++
>  arch/x86/include/asm/kvm_host.h       |  2 +
>  arch/x86/kvm/svm/sev.c                | 90 +++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c                |  2 +
>  arch/x86/kvm/svm/svm.h                |  4 ++
>  arch/x86/kvm/vmx/vmx.c                |  1 +
>  arch/x86/kvm/x86.c                    |  6 ++
>  include/uapi/linux/kvm_para.h         |  1 +
>  8 files changed, 121 insertions(+)
>
> diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
> index dbaf207e560d..ff5287e68e81 100644
> --- a/Documentation/virt/kvm/hypercalls.rst
> +++ b/Documentation/virt/kvm/hypercalls.rst
> @@ -169,3 +169,18 @@ a0: destination APIC ID
>
>  :Usage example: When sending a call-function IPI-many to vCPUs, yield if
>                 any of the IPI target vCPUs was preempted.
> +
> +
> +8. KVM_HC_PAGE_ENC_STATUS
> +-------------------------
> +:Architecture: x86
> +:Status: active
> +:Purpose: Notify the hypervisor of encryption status changes in the guest page table (SEV guest)
> +
> +a0: the guest physical address of the start page
> +a1: the number of pages
> +a2: encryption attribute
> +
> +   Where:
> +       * 1: Encryption attribute is set
> +       * 0: Encryption attribute is cleared
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 42a2d0d3984a..4a8ee22f4f5b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1254,6 +1254,8 @@ struct kvm_x86_ops {
>
>         bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
>         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
> +       int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
> +                                 unsigned long sz, unsigned long mode);
>  };
>
>  struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 698704defbcd..f088467708f0 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1347,6 +1347,93 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
>         return ret;
>  }
>
> +static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       unsigned long *map;
> +       unsigned long sz;
> +
> +       if (sev->page_enc_bmap_size >= new_size)
> +               return 0;
> +
> +       sz = ALIGN(new_size, BITS_PER_LONG) / 8;
> +
> +       map = vmalloc(sz);
> +       if (!map) {
> +               pr_err_once("Failed to allocate encrypted bitmap size %lx\n",
> +                               sz);
> +               return -ENOMEM;
> +       }
> +
> +       /* mark the page encrypted (by default) */
> +       memset(map, 0xff, sz);
> +
> +       bitmap_copy(map, sev->page_enc_bmap, sev->page_enc_bmap_size);
> +       kvfree(sev->page_enc_bmap);
> +
> +       sev->page_enc_bmap = map;
> +       sev->page_enc_bmap_size = new_size;
> +
> +       return 0;
> +}
> +
> +int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
> +                                 unsigned long npages, unsigned long enc)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       kvm_pfn_t pfn_start, pfn_end;
> +       gfn_t gfn_start, gfn_end;
> +
> +       if (!sev_guest(kvm))
> +               return -EINVAL;
> +
> +       if (!npages)
> +               return 0;
> +
> +       gfn_start = gpa_to_gfn(gpa);
> +       gfn_end = gfn_start + npages;
> +
> +       /* out of bound access error check */
> +       if (gfn_end <= gfn_start)
> +               return -EINVAL;
> +
> +       /* let's make sure that the gpa exists in our memslot */
> +       pfn_start = gfn_to_pfn(kvm, gfn_start);
> +       pfn_end = gfn_to_pfn(kvm, gfn_end);
> +
> +       if (is_error_noslot_pfn(pfn_start) && !is_noslot_pfn(pfn_start)) {
> +               /*
> +                * Fail only on a real error; no-slot (MMIO)
> +                * ranges may still enter the encryption bitmap.
> +                */
> +               return -EINVAL;
> +       }
> +
> +       if (is_error_noslot_pfn(pfn_end) && !is_noslot_pfn(pfn_end)) {
> +               /*
> +                * Fail only on a real error; no-slot (MMIO)
> +                * ranges may still enter the encryption bitmap.
> +                */
> +               return -EINVAL;
> +       }
> +
> +       mutex_lock(&kvm->lock);
> +
> +       if (sev->page_enc_bmap_size < gfn_end)
> +               goto unlock;
> +
> +       if (enc)
> +               __bitmap_set(sev->page_enc_bmap, gfn_start,
> +                               gfn_end - gfn_start);
> +       else
> +               __bitmap_clear(sev->page_enc_bmap, gfn_start,
> +                               gfn_end - gfn_start);
> +
> +unlock:
> +       mutex_unlock(&kvm->lock);
> +       return 0;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>         struct kvm_sev_cmd sev_cmd;
> @@ -1560,6 +1647,9 @@ void sev_vm_destroy(struct kvm *kvm)
>
>         sev_unbind_asid(kvm, sev->handle);
>         sev_asid_free(sev->asid);
> +
> +       kvfree(sev->page_enc_bmap);
> +       sev->page_enc_bmap = NULL;
>  }
>
>  int __init sev_hardware_setup(void)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 2f379bacbb26..1013ef0f4ce2 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4014,6 +4014,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>         .apic_init_signal_blocked = svm_apic_init_signal_blocked,
>
>         .check_nested_events = svm_check_nested_events,
> +
> +       .page_enc_status_hc = svm_page_enc_status_hc,
>  };
>
>  static struct kvm_x86_init_ops svm_init_ops __initdata = {
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index df3474f4fb02..6a562f5928a2 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -65,6 +65,8 @@ struct kvm_sev_info {
>         int fd;                 /* SEV device fd */
>         unsigned long pages_locked; /* Number of pages locked */
>         struct list_head regions_list;  /* List of registered regions */
> +       unsigned long *page_enc_bmap;
> +       unsigned long page_enc_bmap_size;
>  };
>
>  struct kvm_svm {
> @@ -400,6 +402,8 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
>                                bool has_error_code, u32 error_code);
>  int svm_check_nested_events(struct kvm_vcpu *vcpu);
>  int nested_svm_exit_special(struct vcpu_svm *svm);
> +int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
> +                                 unsigned long npages, unsigned long enc);
>
>  /* avic.c */
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c2c6335a998c..7d01d3aa6461 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7838,6 +7838,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
>         .nested_get_evmcs_version = NULL,
>         .need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
>         .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
> +       .page_enc_status_hc = NULL,
>  };
>
>  static __init int hardware_setup(void)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c5835f9cb9ad..5f5ddb5765e2 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7605,6 +7605,12 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>                 kvm_sched_yield(vcpu->kvm, a0);
>                 ret = 0;
>                 break;
> +       case KVM_HC_PAGE_ENC_STATUS:
> +               ret = -KVM_ENOSYS;
> +               if (kvm_x86_ops.page_enc_status_hc)
> +                       ret = kvm_x86_ops.page_enc_status_hc(vcpu->kvm,
> +                                       a0, a1, a2);
> +               break;
>         default:
>                 ret = -KVM_ENOSYS;
>                 break;
> diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
> index 8b86609849b9..847b83b75dc8 100644
> --- a/include/uapi/linux/kvm_para.h
> +++ b/include/uapi/linux/kvm_para.h
> @@ -29,6 +29,7 @@
>  #define KVM_HC_CLOCK_PAIRING           9
>  #define KVM_HC_SEND_IPI                10
>  #define KVM_HC_SCHED_YIELD             11
> +#define KVM_HC_PAGE_ENC_STATUS         12
>
>  /*
>   * hypercalls use architecture specific
> --
> 2.17.1
>


Reviewed-by: Steve Rutherford <srutherford@google.com>
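
For a concrete picture of the guest side: when e.g. a 4-page DMA buffer
is remapped decrypted, the guest ends up issuing something like the
following (kvm_sev_hypercall3() is the helper added earlier in this
series; the wrapper function is illustrative):

	/* Sketch: guest marks a 4-page buffer as decrypted (shared). */
	static void notify_buffer_shared(phys_addr_t gpa)
	{
		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
				   gpa,	/* a0: start of the range */
				   4,	/* a1: number of pages    */
				   0);	/* a2: 0 = decrypted      */
	}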

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
  2020-05-05 21:17 ` [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl Ashish Kalra
@ 2020-05-30  2:05   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:05 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:17 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <Brijesh.Singh@amd.com>
>
> The ioctl can be used to retrieve the page encryption bitmap for a given
> gfn range.
>
> Return the correct bitmap as per the number of pages being requested
> by the user. Ensure that we only copy bmap->num_pages bytes in the
> userspace buffer, if bmap->num_pages is not byte aligned we read
> the trailing bits from the userspace and copy those bits as is.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  Documentation/virt/kvm/api.rst  | 27 +++++++++++++
>  arch/x86/include/asm/kvm_host.h |  2 +
>  arch/x86/kvm/svm/sev.c          | 70 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c          |  1 +
>  arch/x86/kvm/svm/svm.h          |  1 +
>  arch/x86/kvm/x86.c              | 12 ++++++
>  include/uapi/linux/kvm.h        | 12 ++++++
>  7 files changed, 125 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index efbbe570aa9b..ecad84086892 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -4636,6 +4636,33 @@ This ioctl resets VCPU registers and control structures according to
>  the clear cpu reset definition in the POP. However, the cpu is not put
>  into ESA mode. This reset is a superset of the initial reset.
>
> +4.125 KVM_GET_PAGE_ENC_BITMAP (vm ioctl)
> +----------------------------------------
> +
> +:Capability: basic
> +:Architectures: x86
> +:Type: vm ioctl
> +:Parameters: struct kvm_page_enc_bitmap (in/out)
> +:Returns: 0 on success, -1 on error
> +
> +/* for KVM_GET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
> +The encrypted VMs have the concept of private and shared pages. The private
> +pages are encrypted with the guest-specific key, while the shared pages may
> +be encrypted with the hypervisor key. The KVM_GET_PAGE_ENC_BITMAP can
> +be used to get the bitmap indicating whether the guest page is private
> +or shared. The bitmap can be used during the guest migration. If the page
> +is private then userspace needs to use SEV migration commands to transmit
> +the page.
> +
>
>  4.125 KVM_S390_PV_COMMAND
>  -------------------------
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4a8ee22f4f5b..9e428befb6a4 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1256,6 +1256,8 @@ struct kvm_x86_ops {
>         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
>         int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long sz, unsigned long mode);
> +       int (*get_page_enc_bitmap)(struct kvm *kvm,
> +                               struct kvm_page_enc_bitmap *bmap);
>  };
>
>  struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index f088467708f0..387045902470 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1434,6 +1434,76 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>         return 0;
>  }
>
> +int svm_get_page_enc_bitmap(struct kvm *kvm,
> +                                  struct kvm_page_enc_bitmap *bmap)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       unsigned long gfn_start, gfn_end;
> +       unsigned long sz, i, sz_bytes;
> +       unsigned long *bitmap;
> +       int ret, n;
> +
> +       if (!sev_guest(kvm))
> +               return -ENOTTY;
> +
> +       gfn_start = bmap->start_gfn;
> +       gfn_end = gfn_start + bmap->num_pages;
> +
> +       sz = ALIGN(bmap->num_pages, BITS_PER_LONG) / BITS_PER_BYTE;
> +       bitmap = kmalloc(sz, GFP_KERNEL);
> +       if (!bitmap)
> +               return -ENOMEM;
> +
> +       /* by default all pages are marked encrypted */
> +       memset(bitmap, 0xff, sz);
> +
> +       mutex_lock(&kvm->lock);
> +       if (sev->page_enc_bmap) {
> +               i = gfn_start;
> +               for_each_clear_bit_from(i, sev->page_enc_bmap,
> +                                     min(sev->page_enc_bmap_size, gfn_end))
gfn_end is not a size? I believe you want either gfn_end - gfn_start
or bmap->num_pages.

> +                       clear_bit(i - gfn_start, bitmap);
> +       }
> +       mutex_unlock(&kvm->lock);
> +
> +       ret = -EFAULT;
> +
> +       n = bmap->num_pages % BITS_PER_BYTE;
> +       sz_bytes = ALIGN(bmap->num_pages, BITS_PER_BYTE) / BITS_PER_BYTE;
> +
> +       /*
> +        * Return the correct bitmap as per the number of pages being
> +        * requested by the user. Ensure that we only copy bmap->num_pages
> +        * bytes in the userspace buffer, if bmap->num_pages is not byte
> +        * aligned we read the trailing bits from the userspace and copy
Nit: "userspace" instead of "the userspace".



> +        * those bits as is.
> +        */
> +
> +       if (n) {
> +               unsigned char *bitmap_kernel = (unsigned char *)bitmap;
> +               unsigned char bitmap_user;
> +               unsigned long offset, mask;
> +
> +               offset = bmap->num_pages / BITS_PER_BYTE;
> +               if (copy_from_user(&bitmap_user, bmap->enc_bitmap + offset,
> +                               sizeof(unsigned char)))
> +                       goto out;
> +
> +               mask = GENMASK(n - 1, 0);
> +               bitmap_user &= ~mask;
> +               bitmap_kernel[offset] &= mask;
> +               bitmap_kernel[offset] |= bitmap_user;
> +       }
> +
> +       if (copy_to_user(bmap->enc_bitmap, bitmap, sz_bytes))
> +               goto out;
> +
> +       ret = 0;
> +out:
> +       kfree(bitmap);
> +       return ret;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>         struct kvm_sev_cmd sev_cmd;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1013ef0f4ce2..588709a9f68e 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4016,6 +4016,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>         .check_nested_events = svm_check_nested_events,
>
>         .page_enc_status_hc = svm_page_enc_status_hc,
> +       .get_page_enc_bitmap = svm_get_page_enc_bitmap,
>  };
>
>  static struct kvm_x86_init_ops svm_init_ops __initdata = {
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 6a562f5928a2..f087fa7b380c 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -404,6 +404,7 @@ int svm_check_nested_events(struct kvm_vcpu *vcpu);
>  int nested_svm_exit_special(struct vcpu_svm *svm);
>  int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long npages, unsigned long enc);
> +int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
>
>  /* avic.c */
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5f5ddb5765e2..937797cfaf9a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5208,6 +5208,18 @@ long kvm_arch_vm_ioctl(struct file *filp,
>         case KVM_SET_PMU_EVENT_FILTER:
>                 r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
>                 break;
> +       case KVM_GET_PAGE_ENC_BITMAP: {
> +               struct kvm_page_enc_bitmap bitmap;
> +
> +               r = -EFAULT;
> +               if (copy_from_user(&bitmap, argp, sizeof(bitmap)))
> +                       goto out;
> +
> +               r = -ENOTTY;
> +               if (kvm_x86_ops.get_page_enc_bitmap)
> +                       r = kvm_x86_ops.get_page_enc_bitmap(kvm, &bitmap);
> +               break;
> +       }
>         default:
>                 r = -ENOTTY;
>         }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 0fe1d206d750..af62f2afaa5d 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -505,6 +505,16 @@ struct kvm_dirty_log {
>         };
>  };
>
> +/* for KVM_GET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
>  /* for KVM_CLEAR_DIRTY_LOG */
>  struct kvm_clear_dirty_log {
>         __u32 slot;
> @@ -1518,6 +1528,8 @@ struct kvm_pv_cmd {
>  /* Available with KVM_CAP_S390_PROTECTED */
>  #define KVM_S390_PV_COMMAND            _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
>
> +#define KVM_GET_PAGE_ENC_BITMAP        _IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
> +
>  /* Secure Encrypted Virtualization command */
>  enum sev_cmd_id {
>         /* Guest initialization commands */
> --
> 2.17.1
>
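
From the migration loop's point of view the ioctl slots in like this; a
userspace sketch with this series applied (naming illustrative, error
paths trimmed):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: fetch the encryption bitmap for a gfn range. */
	static unsigned char *fetch_enc_bitmap(int vm_fd, __u64 start_gfn,
					       __u64 num_pages)
	{
		unsigned char *bitmap = calloc(1, (num_pages + 7) / 8);
		struct kvm_page_enc_bitmap bmap = {
			.start_gfn  = start_gfn,
			.num_pages  = num_pages,
			.enc_bitmap = bitmap,
		};

		if (bitmap && ioctl(vm_fd, KVM_GET_PAGE_ENC_BITMAP, &bmap)) {
			free(bitmap);
			return NULL;
		}
		return bitmap;
	}

A set bit marks a private page that must go through SEND_UPDATE_DATA; a
clear bit marks a shared page that can take the existing dirty-page path.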

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed
  2020-05-05 21:17 ` [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
@ 2020-05-30  2:06   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:06 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:18 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <Brijesh.Singh@amd.com>
>
> Invoke a hypercall when a memory region is changed from encrypted ->
> decrypted and vice versa. The hypervisor needs to know the page encryption
> status during the guest migration.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/include/asm/paravirt.h       | 10 +++++
>  arch/x86/include/asm/paravirt_types.h |  2 +
>  arch/x86/kernel/paravirt.c            |  1 +
>  arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
>  arch/x86/mm/pat/set_memory.c          |  7 ++++
>  5 files changed, 76 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 694d8daf4983..8127b9c141bf 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -78,6 +78,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>         PVOP_VCALL1(mmu.exit_mmap, mm);
>  }
>
> +static inline void page_encryption_changed(unsigned long vaddr, int npages,
> +                                               bool enc)
> +{
> +       PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
> +}
> +
>  #ifdef CONFIG_PARAVIRT_XXL
>  static inline void load_sp0(unsigned long sp0)
>  {
> @@ -946,6 +952,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
>  static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>  {
>  }
> +
> +static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
> +{
> +}
>  #endif
>  #endif /* __ASSEMBLY__ */
>  #endif /* _ASM_X86_PARAVIRT_H */
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 732f62e04ddb..03bfd515c59c 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -215,6 +215,8 @@ struct pv_mmu_ops {
>
>         /* Hook for intercepting the destruction of an mm_struct. */
>         void (*exit_mmap)(struct mm_struct *mm);
> +       void (*page_encryption_changed)(unsigned long vaddr, int npages,
> +                                       bool enc);
>
>  #ifdef CONFIG_PARAVIRT_XXL
>         struct paravirt_callee_save read_cr2;
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index c131ba4e70ef..840c02b23aeb 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -367,6 +367,7 @@ struct paravirt_patch_template pv_ops = {
>                         (void (*)(struct mmu_gather *, void *))tlb_remove_page,
>
>         .mmu.exit_mmap          = paravirt_nop,
> +       .mmu.page_encryption_changed    = paravirt_nop,
>
>  #ifdef CONFIG_PARAVIRT_XXL
>         .mmu.read_cr2           = __PV_IS_CALLEE_SAVE(native_read_cr2),
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index f4bd4b431ba1..c9800fa811f6 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -19,6 +19,7 @@
>  #include <linux/kernel.h>
>  #include <linux/bitops.h>
>  #include <linux/dma-mapping.h>
> +#include <linux/kvm_para.h>
>
>  #include <asm/tlbflush.h>
>  #include <asm/fixmap.h>
> @@ -29,6 +30,7 @@
>  #include <asm/processor-flags.h>
>  #include <asm/msr.h>
>  #include <asm/cmdline.h>
> +#include <asm/kvm_para.h>
>
>  #include "mm_internal.h"
>
> @@ -196,6 +198,47 @@ void __init sme_early_init(void)
>                 swiotlb_force = SWIOTLB_FORCE;
>  }
>
> +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
> +                                       bool enc)
> +{
> +       unsigned long sz = npages << PAGE_SHIFT;
> +       unsigned long vaddr_end, vaddr_next;
> +
> +       vaddr_end = vaddr + sz;
> +
> +       for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> +               int psize, pmask, level;
> +               unsigned long pfn;
> +               pte_t *kpte;
> +
> +               kpte = lookup_address(vaddr, &level);
> +               if (!kpte || pte_none(*kpte))
> +                       return;
> +
> +               switch (level) {
> +               case PG_LEVEL_4K:
> +                       pfn = pte_pfn(*kpte);
> +                       break;
> +               case PG_LEVEL_2M:
> +                       pfn = pmd_pfn(*(pmd_t *)kpte);
> +                       break;
> +               case PG_LEVEL_1G:
> +                       pfn = pud_pfn(*(pud_t *)kpte);
> +                       break;
> +               default:
> +                       return;
> +               }
> +
> +               psize = page_level_size(level);
> +               pmask = page_level_mask(level);
> +
> +               kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> +                                  pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
> +
> +               vaddr_next = (vaddr & pmask) + psize;
> +       }
> +}
> +
>  static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  {
>         pgprot_t old_prot, new_prot;
> @@ -253,12 +296,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  static int __init early_set_memory_enc_dec(unsigned long vaddr,
>                                            unsigned long size, bool enc)
>  {
> -       unsigned long vaddr_end, vaddr_next;
> +       unsigned long vaddr_end, vaddr_next, start;
>         unsigned long psize, pmask;
>         int split_page_size_mask;
>         int level, ret;
>         pte_t *kpte;
>
> +       start = vaddr;
>         vaddr_next = vaddr;
>         vaddr_end = vaddr + size;
>
> @@ -313,6 +357,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
>
>         ret = 0;
>
> +       set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
> +                                       enc);
>  out:
>         __flush_tlb_all();
>         return ret;
> @@ -451,6 +497,15 @@ void __init mem_encrypt_init(void)
>         if (sev_active())
>                 static_branch_enable(&sev_enable_key);
>
> +#ifdef CONFIG_PARAVIRT
> +       /*
> +        * With SEV, we need to make a hypercall when page encryption state is
> +        * changed.
> +        */
> +       if (sev_active())
> +               pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
> +#endif
> +
>         pr_info("AMD %s active\n",
>                 sev_active() ? "Secure Encrypted Virtualization (SEV)"
>                              : "Secure Memory Encryption (SME)");
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 59eca6a94ce7..9aaf1b6f5a1b 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
>  #include <asm/proto.h>
>  #include <asm/memtype.h>
>  #include <asm/set_memory.h>
> +#include <asm/paravirt.h>
>
>  #include "../mm_internal.h"
>
> @@ -2003,6 +2004,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>          */
>         cpa_flush(&cpa, 0);
>
> +       /* Notify hypervisor that a given memory range is mapped encrypted
> +        * or decrypted. The hypervisor will use this information during the
> +        * VM migration.
> +        */
> +       page_encryption_changed(addr, numpages, enc);
> +
>         return ret;
>  }
>
> --
> 2.17.1
>


Reviewed-by: Steve Rutherford <srutherford@google.com>
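
To make the vaddr_next arithmetic in set_memory_enc_dec_hypercall()
concrete, assume the range starts in the middle of a 2M mapping (the
address is illustrative):

	level      = PG_LEVEL_2M
	psize      = page_level_size(level) = 0x200000 (2 MiB)
	pmask      = page_level_mask(level) = ~0x1fffffUL
	vaddr      = 0xffff888004201000     (mid-mapping)
	vaddr_next = (vaddr & pmask) + psize
	           = 0xffff888004200000 + 0x200000
	           = 0xffff888004400000

A single hypercall thus reports psize >> PAGE_SHIFT = 512 pages for the
whole 2M mapping, and the walk resumes at the next 2M boundary.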

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl
  2020-05-05 21:18 ` [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl Ashish Kalra
@ 2020-05-30  2:06   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:06 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:18 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <Brijesh.Singh@amd.com>
>
> The ioctl can be used to set the page encryption bitmap for an
> incoming guest.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  Documentation/virt/kvm/api.rst  | 44 +++++++++++++++++++++++++++++
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/svm/sev.c          | 50 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c          |  1 +
>  arch/x86/kvm/svm/svm.h          |  1 +
>  arch/x86/kvm/x86.c              | 12 ++++++++
>  include/uapi/linux/kvm.h        |  1 +
>  7 files changed, 111 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index ecad84086892..fa70017ee693 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -4663,6 +4663,28 @@ or shared. The bitmap can be used during the guest migration. If the page
>  is private then userspace needs to use SEV migration commands to transmit
>  the page.
>
> +4.126 KVM_SET_PAGE_ENC_BITMAP (vm ioctl)
> +----------------------------------------
> +
> +:Capability: basic
> +:Architectures: x86
> +:Type: vm ioctl
> +:Parameters: struct kvm_page_enc_bitmap (in/out)
> +:Returns: 0 on success, -1 on error
> +
> +/* for KVM_SET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
> +During guest live migration the outgoing guest exports its page encryption
> +bitmap; KVM_SET_PAGE_ENC_BITMAP can then be used to rebuild the page
> +encryption bitmap for an incoming guest.
>
>  4.125 KVM_S390_PV_COMMAND
>  -------------------------
> @@ -4717,6 +4739,28 @@ KVM_PV_VM_VERIFY
>    Verify the integrity of the unpacked image. Only if this succeeds,
>    KVM is allowed to start protected VCPUs.
>
> +4.126 KVM_SET_PAGE_ENC_BITMAP (vm ioctl)
> +----------------------------------------
> +
> +:Capability: basic
> +:Architectures: x86
> +:Type: vm ioctl
> +:Parameters: struct kvm_page_enc_bitmap (in/out)
> +:Returns: 0 on success, -1 on error
> +
> +/* for KVM_SET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
> +During guest live migration the outgoing guest exports its page encryption
> +bitmap; KVM_SET_PAGE_ENC_BITMAP can then be used to rebuild the page
> +encryption bitmap for an incoming guest.
>
>  5. The kvm_run structure
>  ========================
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9e428befb6a4..fc74144d5ab0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1258,6 +1258,8 @@ struct kvm_x86_ops {
>                                   unsigned long sz, unsigned long mode);
>         int (*get_page_enc_bitmap)(struct kvm *kvm,
>                                 struct kvm_page_enc_bitmap *bmap);
> +       int (*set_page_enc_bitmap)(struct kvm *kvm,
> +                               struct kvm_page_enc_bitmap *bmap);
>  };
>
>  struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 387045902470..30efc1068707 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1504,6 +1504,56 @@ int svm_get_page_enc_bitmap(struct kvm *kvm,
>         return ret;
>  }
>
> +int svm_set_page_enc_bitmap(struct kvm *kvm,
> +                                  struct kvm_page_enc_bitmap *bmap)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       unsigned long gfn_start, gfn_end;
> +       unsigned long *bitmap;
> +       unsigned long sz;
> +       int ret;
> +
> +       if (!sev_guest(kvm))
> +               return -ENOTTY;
> +       /* special case of resetting the complete bitmap */
> +       if (!bmap->enc_bitmap) {
> +               mutex_lock(&kvm->lock);
> +               /* by default all pages are marked encrypted */
> +               if (sev->page_enc_bmap_size)
> +                       bitmap_fill(sev->page_enc_bmap,
> +                                   sev->page_enc_bmap_size);
> +               mutex_unlock(&kvm->lock);
> +               return 0;
> +       }
>
> +
> +       gfn_start = bmap->start_gfn;
> +       gfn_end = gfn_start + bmap->num_pages;
> +
> +       sz = ALIGN(bmap->num_pages, BITS_PER_LONG) / 8;
> +       bitmap = kmalloc(sz, GFP_KERNEL);
> +       if (!bitmap)
> +               return -ENOMEM;
> +
> +       ret = -EFAULT;
> +       if (copy_from_user(bitmap, bmap->enc_bitmap, sz))
> +               goto out;
> +
> +       mutex_lock(&kvm->lock);
> +       ret = sev_resize_page_enc_bitmap(kvm, gfn_end);
> +       if (ret)
> +               goto unlock;
> +
> +       bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
> +                   (gfn_end - gfn_start));

I *think* this assumes that gfn_start is a multiple of 8. I'm not
certain I have a clean suggestion for fixing this, other than
advertising that this is an expectation, and returning an error if
that is not true.

If I'm reading bitmap_copy correctly, I also think it assumes all
bitmaps have lengths that are unsigned long aligned, which surprised
me.
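
One way to make that expectation explicit, as a sketch (note that the
BIT_WORD() arithmetic below actually wants alignment to BITS_PER_LONG,
which is stricter than a multiple of 8; the guard itself is assumed
code, not something this patch defines):

	/*
	 * Hypothetical guard: reject requests whose start_gfn does not
	 * land on a bitmap word boundary, since the bitmap_copy() below
	 * writes whole words starting at BIT_WORD(gfn_start).
	 */
	if (!IS_ALIGNED(bmap->start_gfn, BITS_PER_LONG))
		return -EINVAL;
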
>
> +
> +       ret = 0;
> +unlock:
> +       mutex_unlock(&kvm->lock);
> +out:
> +       kfree(bitmap);
> +       return ret;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>         struct kvm_sev_cmd sev_cmd;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 588709a9f68e..501e82f5593c 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4017,6 +4017,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>
>         .page_enc_status_hc = svm_page_enc_status_hc,
>         .get_page_enc_bitmap = svm_get_page_enc_bitmap,
> +       .set_page_enc_bitmap = svm_set_page_enc_bitmap,
>  };
>
>  static struct kvm_x86_init_ops svm_init_ops __initdata = {
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index f087fa7b380c..2ebdcce50312 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -405,6 +405,7 @@ int nested_svm_exit_special(struct vcpu_svm *svm);
>  int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long npages, unsigned long enc);
>  int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
> +int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
>
>  /* avic.c */
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 937797cfaf9a..c4166d7a0493 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5220,6 +5220,18 @@ long kvm_arch_vm_ioctl(struct file *filp,
>                         r = kvm_x86_ops.get_page_enc_bitmap(kvm, &bitmap);
>                 break;
>         }
> +       case KVM_SET_PAGE_ENC_BITMAP: {
> +               struct kvm_page_enc_bitmap bitmap;
> +
> +               r = -EFAULT;
> +               if (copy_from_user(&bitmap, argp, sizeof(bitmap)))
> +                       goto out;
> +
> +               r = -ENOTTY;
> +               if (kvm_x86_ops.set_page_enc_bitmap)
> +                       r = kvm_x86_ops.set_page_enc_bitmap(kvm, &bitmap);
> +               break;
> +       }
>         default:
>                 r = -ENOTTY;
>         }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index af62f2afaa5d..2798b17484d0 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1529,6 +1529,7 @@ struct kvm_pv_cmd {
>  #define KVM_S390_PV_COMMAND            _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
>
>  #define KVM_GET_PAGE_ENC_BITMAP        _IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
> +#define KVM_SET_PAGE_ENC_BITMAP        _IOW(KVMIO, 0xc7, struct kvm_page_enc_bitmap)
>
>  /* Secure Encrypted Virtualization command */
>  enum sev_cmd_id {
> --
> 2.17.1
>

Otherwise, this looks good to me. Thanks for merging the ioctls together.
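
For reference, a minimal userspace invocation of the combined ioctl
could look like this (a sketch only: error handling is elided, vm_fd is
assumed to come from KVM_CREATE_VM, and the kernel headers are assumed
to carry this series, since struct kvm_page_enc_bitmap is not in
mainline uapi):

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Upload the encryption status bitmap for GFNs [0, num_pages). */
	static int set_enc_bitmap(int vm_fd, uint64_t num_pages,
				  unsigned long *bits)
	{
		struct kvm_page_enc_bitmap bmap;

		memset(&bmap, 0, sizeof(bmap));
		bmap.start_gfn = 0;
		bmap.num_pages = num_pages;
		bmap.enc_bitmap = bits;	/* one bit per page, 1 = encrypted */

		return ioctl(vm_fd, KVM_SET_PAGE_ENC_BITMAP, &bmap);
	}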

Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-05-05 21:18 ` [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap Ashish Kalra
@ 2020-05-30  2:07   ` Steve Rutherford
  2020-05-30  5:49     ` Ashish Kalra
  2020-12-04 11:08   ` Paolo Bonzini
  1 sibling, 1 reply; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:07 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:18 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> Add support for static allocation of the unified page encryption
> bitmap by extending the kvm_arch_commit_memory_region() callback with
> an SVM-specific x86_op which reads the userspace-provided memory
> regions/memslots and calculates the amount of guest RAM managed by
> KVM, growing the bitmap based on that information, i.e. the highest
> guest PA that is mapped by a memslot.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/svm/sev.c          | 35 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c          |  1 +
>  arch/x86/kvm/svm/svm.h          |  1 +
>  arch/x86/kvm/x86.c              |  5 +++++
>  5 files changed, 43 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index fc74144d5ab0..b573ea85b57e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1254,6 +1254,7 @@ struct kvm_x86_ops {
>
>         bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
>         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
> +       void (*commit_memory_region)(struct kvm *kvm, enum kvm_mr_change change);
>         int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long sz, unsigned long mode);
>         int (*get_page_enc_bitmap)(struct kvm *kvm,
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 30efc1068707..c0d7043a0627 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1377,6 +1377,41 @@ static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
>         return 0;
>  }
>
> +void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change)
> +{
> +       struct kvm_memslots *slots;
> +       struct kvm_memory_slot *memslot;
> +       gfn_t start, end = 0;
> +
> +       spin_lock(&kvm->mmu_lock);
> +       if (change == KVM_MR_CREATE) {
> +               slots = kvm_memslots(kvm);
> +               kvm_for_each_memslot(memslot, slots) {
> +                       start = memslot->base_gfn;
> +                       end = memslot->base_gfn + memslot->npages;
> +                       /*
> +                        * The KVM memslots list is sorted, starting with
> +                        * the highest mapped guest PA, so pick the topmost
> +                        * valid guest PA.
> +                        */
> +                       if (memslot->npages)
> +                               break;
> +               }
> +       }
> +       spin_unlock(&kvm->mmu_lock);
> +
> +       if (end) {
> +               /*
> +                * NOTE: This callback is invoked from the vm ioctl
> +                * set_user_memory_region path, hence we can use a
> +                * mutex here.
> +                */
> +               mutex_lock(&kvm->lock);
> +               sev_resize_page_enc_bitmap(kvm, end);
> +               mutex_unlock(&kvm->lock);
> +       }
> +}
> +
>  int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long npages, unsigned long enc)
>  {
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 501e82f5593c..442adbbb0641 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4015,6 +4015,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>
>         .check_nested_events = svm_check_nested_events,
>
> +       .commit_memory_region = svm_commit_memory_region,
>         .page_enc_status_hc = svm_page_enc_status_hc,
>         .get_page_enc_bitmap = svm_get_page_enc_bitmap,
>         .set_page_enc_bitmap = svm_set_page_enc_bitmap,
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 2ebdcce50312..fd99e0a5417a 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -406,6 +406,7 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long npages, unsigned long enc);
>  int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
>  int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
> +void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change);
>
>  /* avic.c */
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c4166d7a0493..8938de868d42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10133,6 +10133,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>                 kvm_mmu_change_mmu_pages(kvm,
>                                 kvm_mmu_calculate_default_mmu_pages(kvm));
>
> +       if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
> +               if (kvm_x86_ops.commit_memory_region)
> +                       kvm_x86_ops.commit_memory_region(kvm, change);
Why not just call this every time (if it exists) and have the
kvm_x86_op determine if it should do anything?

It seems like it's a nop anyway unless you are doing a create.
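
A sketch of that shape, with the filtering moved into the vendor
callback (assumed code, not part of the series as posted):

	/* x86.c: invoke the hook unconditionally if it exists. */
	if (kvm_x86_ops.commit_memory_region)
		kvm_x86_ops.commit_memory_region(kvm, change);

	/* svm/sev.c: the callback decides whether to act on 'change'. */
	void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change)
	{
		if (change != KVM_MR_CREATE)
			return;
		/* ... walk memslots and resize the bitmap, as above ... */
	}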

> +       }
> +
>         /*
>          * Dirty logging tracks sptes in 4k granularity, meaning that large
>          * sptes have to be split.  If live migration is successful, the guest
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-05-05 21:19 ` [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
@ 2020-05-30  2:07   ` Steve Rutherford
  2020-12-04 11:20   ` Paolo Bonzini
  1 sibling, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:07 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:19 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> Add a new KVM_FEATURE_SEV_LIVE_MIGRATION feature bit for the guest to
> check for host-side support for SEV live migration. Also add a new
> custom MSR, MSR_KVM_SEV_LIVE_MIG_EN, for the guest to enable the SEV
> live migration feature.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  Documentation/virt/kvm/cpuid.rst     |  5 +++++
>  Documentation/virt/kvm/msr.rst       | 10 ++++++++++
>  arch/x86/include/uapi/asm/kvm_para.h |  5 +++++
>  arch/x86/kvm/svm/sev.c               | 14 ++++++++++++++
>  arch/x86/kvm/svm/svm.c               | 16 ++++++++++++++++
>  arch/x86/kvm/svm/svm.h               |  2 ++
>  6 files changed, 52 insertions(+)
>
> diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
> index 01b081f6e7ea..0514523e00cd 100644
> --- a/Documentation/virt/kvm/cpuid.rst
> +++ b/Documentation/virt/kvm/cpuid.rst
> @@ -86,6 +86,11 @@ KVM_FEATURE_PV_SCHED_YIELD        13          guest checks this feature bit
>                                                before using paravirtualized
>                                                sched yield.
>
> +KVM_FEATURE_SEV_LIVE_MIGRATION    14          guest checks this feature bit before
> +                                              using the page encryption state
> +                                              hypercall to notify the page state
> +                                              change
> +
>  KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24         host will warn if no guest-side
>                                                per-cpu warps are expected in
>                                                kvmclock
> diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
> index 33892036672d..7cd7786bbb03 100644
> --- a/Documentation/virt/kvm/msr.rst
> +++ b/Documentation/virt/kvm/msr.rst
> @@ -319,3 +319,13 @@ data:
>
>         KVM guests can request the host not to poll on HLT, for example if
>         they are performing polling themselves.
> +
> +MSR_KVM_SEV_LIVE_MIG_EN:
> +        0x4b564d06
> +
> +       Control SEV Live Migration features.
> +
> +data:
> +        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature.
> +        Bit 1 enables (1) or disables (0) support for SEV Live Migration extensions.
> +        All other bits are reserved.
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 2a8e0b6b9805..d9d4953b42ad 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -31,6 +31,7 @@
>  #define KVM_FEATURE_PV_SEND_IPI        11
>  #define KVM_FEATURE_POLL_CONTROL       12
>  #define KVM_FEATURE_PV_SCHED_YIELD     13
> +#define KVM_FEATURE_SEV_LIVE_MIGRATION 14
>
>  #define KVM_HINTS_REALTIME      0
>
> @@ -50,6 +51,7 @@
>  #define MSR_KVM_STEAL_TIME  0x4b564d03
>  #define MSR_KVM_PV_EOI_EN      0x4b564d04
>  #define MSR_KVM_POLL_CONTROL   0x4b564d05
> +#define MSR_KVM_SEV_LIVE_MIG_EN        0x4b564d06
>
>  struct kvm_steal_time {
>         __u64 steal;
> @@ -122,4 +124,7 @@ struct kvm_vcpu_pv_apf_data {
>  #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
>  #define KVM_PV_EOI_DISABLED 0x0
>
> +#define KVM_SEV_LIVE_MIGRATION_ENABLED                 (1 << 0)
> +#define KVM_SEV_LIVE_MIGRATION_EXTENSIONS_SUPPORTED    (1 << 1)
> +
>  #endif /* _UAPI_ASM_X86_KVM_PARA_H */
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index c0d7043a0627..6f69c3a47583 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1469,6 +1469,17 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>         return 0;
>  }
>
> +void sev_update_migration_flags(struct kvm *kvm, u64 data)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +
> +       if (!sev_guest(kvm))
> +               return;
> +
> +       if (data & KVM_SEV_LIVE_MIGRATION_ENABLED)
> +               sev->live_migration_enabled = true;
> +}
> +
>  int svm_get_page_enc_bitmap(struct kvm *kvm,
>                                    struct kvm_page_enc_bitmap *bmap)
>  {
> @@ -1481,6 +1492,9 @@ int svm_get_page_enc_bitmap(struct kvm *kvm,
>         if (!sev_guest(kvm))
>                 return -ENOTTY;
>
> +       if (!sev->live_migration_enabled)
> +               return -EINVAL;
> +
>         gfn_start = bmap->start_gfn;
>         gfn_end = gfn_start + bmap->num_pages;
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 442adbbb0641..a99f5457f244 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2633,6 +2633,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>                 svm->msr_decfg = data;
>                 break;
>         }
> +       case MSR_KVM_SEV_LIVE_MIG_EN:
> +               sev_update_migration_flags(vcpu->kvm, data);
> +               break;
>         case MSR_IA32_APICBASE:
>                 if (kvm_vcpu_apicv_active(vcpu))
>                         avic_update_vapic_bar(to_svm(vcpu), data);
> @@ -3493,6 +3496,19 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
>         svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
>                              guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
>
> +        /*
> +         * If this is an SEV guest, enable the live migration feature.
> +         */
> +        if (sev_guest(vcpu->kvm)) {
> +              struct kvm_cpuid_entry2 *best;
> +
> +              best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
> +              if (!best)
> +                      return;
> +
> +              best->eax |= (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);
> +        }
> +
>         if (!kvm_vcpu_apicv_active(vcpu))
>                 return;
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index fd99e0a5417a..77f132a6fead 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -65,6 +65,7 @@ struct kvm_sev_info {
>         int fd;                 /* SEV device fd */
>         unsigned long pages_locked; /* Number of pages locked */
>         struct list_head regions_list;  /* List of registered regions */
> +       bool live_migration_enabled;
>         unsigned long *page_enc_bmap;
>         unsigned long page_enc_bmap_size;
>  };
> @@ -494,5 +495,6 @@ int svm_unregister_enc_region(struct kvm *kvm,
>  void pre_sev_run(struct vcpu_svm *svm, int cpu);
>  int __init sev_hardware_setup(void);
>  void sev_hardware_teardown(void);
> +void sev_update_migration_flags(struct kvm *kvm, u64 data);
>
>  #endif
> --
> 2.17.1
>

Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID.
  2020-05-05 21:20 ` [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
@ 2020-05-30  2:07   ` Steve Rutherford
  2020-05-30  5:51     ` Ashish Kalra
  0 siblings, 1 reply; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:07 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:20 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> Introduce a new AMD Memory Encryption GUID which is currently used
> for defining a new UEFI environment variable which indicates UEFI/OVMF
> support for the SEV live migration feature. This variable is set up
> when UEFI/OVMF detects host/hypervisor support for SEV live migration,
> and is later read by the kernel using EFI runtime services to verify
> whether OVMF supports the live migration feature.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  include/linux/efi.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/include/linux/efi.h b/include/linux/efi.h
> index 251f1f783cdf..2efb42ccf3a8 100644
> --- a/include/linux/efi.h
> +++ b/include/linux/efi.h
> @@ -358,6 +358,7 @@ void efi_native_runtime_setup(void);
>
>  /* OEM GUIDs */
>  #define DELLEMC_EFI_RCI2_TABLE_GUID            EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
> +#define MEM_ENCRYPT_GUID                       EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
>
>  typedef struct {
>         efi_guid_t guid;
> --
> 2.17.1
>
Have you gotten this GUID upstreamed into edk2?

Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature.
  2020-05-05 21:20 ` [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
@ 2020-05-30  2:08   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:08 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:20 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> The guest support for detecting and enabling the SEV live migration
> feature uses the following logic:
>
>  - kvm_init_platform() checks if the guest is booted under EFI
>
>    - If not EFI,
>
>      i) check the KVM_FEATURE_CPUID
>
>      ii) if CPUID reports that migration is supported, issue a wrmsrl()
>          to enable the SEV live migration support
>
>    - If EFI,
>
>      i) check the KVM_FEATURE_CPUID
>
>      ii) if CPUID reports that migration is supported, read the UEFI
>          environment variable which indicates OVMF support for live
>          migration
>
>      iii) if the variable is set, write the MSR to enable the SEV live
>           migration support
>
> The EFI live migration check is done using a late_initcall() callback.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h | 11 ++++++
>  arch/x86/kernel/kvm.c              | 62 ++++++++++++++++++++++++++++++
>  arch/x86/mm/mem_encrypt.c          | 11 ++++++
>  3 files changed, 84 insertions(+)
>
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 848ce43b9040..d10e92ae5ca1 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -20,6 +20,7 @@
>
>  extern u64 sme_me_mask;
>  extern bool sev_enabled;
> +extern bool sev_live_mig_enabled;
>
>  void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
>                          unsigned long decrypted_kernel_vaddr,
> @@ -42,6 +43,8 @@ void __init sme_enable(struct boot_params *bp);
>
>  int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
>  int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
> +                                           bool enc);
>
>  /* Architecture __weak replacement functions */
>  void __init mem_encrypt_init(void);
> @@ -55,6 +58,7 @@ bool sev_active(void);
>  #else  /* !CONFIG_AMD_MEM_ENCRYPT */
>
>  #define sme_me_mask    0ULL
> +#define sev_live_mig_enabled   false
>
>  static inline void __init sme_early_encrypt(resource_size_t paddr,
>                                             unsigned long size) { }
> @@ -76,6 +80,8 @@ static inline int __init
>  early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
>  static inline int __init
>  early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
> +static inline void __init
> +early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
>
>  #define __bss_decrypted
>
> @@ -102,6 +108,11 @@ static inline u64 sme_get_me_mask(void)
>         return sme_me_mask;
>  }
>
> +static inline bool sev_live_migration_enabled(void)
> +{
> +       return sev_live_mig_enabled;
> +}
> +
>  #endif /* __ASSEMBLY__ */
>
>  #endif /* __X86_MEM_ENCRYPT_H__ */
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 6efe0410fb72..4b29815de873 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -24,6 +24,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/nmi.h>
>  #include <linux/swait.h>
> +#include <linux/efi.h>
>  #include <asm/timer.h>
>  #include <asm/cpu.h>
>  #include <asm/traps.h>
> @@ -403,6 +404,53 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
>         early_set_memory_decrypted((unsigned long) ptr, size);
>  }
>
> +#ifdef CONFIG_EFI
> +static bool setup_kvm_sev_migration(void)
> +{
> +       efi_char16_t efi_Sev_Live_Mig_support_name[] = L"SevLiveMigrationEnabled";
> +       efi_guid_t efi_variable_guid = MEM_ENCRYPT_GUID;
> +       efi_status_t status;
> +       unsigned long size;
> +       bool enabled;
> +
> +       if (!sev_live_migration_enabled())
> +               return false;
> +
> +       size = sizeof(enabled);
> +
> +       if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
> +               pr_info("setup_kvm_sev_migration: no efi\n");
> +               return false;
> +       }
> +
> +       /* Get variable contents into buffer */
> +       status = efi.get_variable(efi_Sev_Live_Mig_support_name,
> +                                 &efi_variable_guid, NULL, &size, &enabled);
> +
> +       if (status == EFI_NOT_FOUND) {
> +               pr_info("setup_kvm_sev_migration: variable not found\n");
> +               return false;
> +       }
> +
> +       if (status != EFI_SUCCESS) {
> +               pr_info("setup_kvm_sev_migration: get_variable fail\n");
> +               return false;
> +       }
> +
> +       if (enabled == 0) {
> +               pr_info("setup_kvm_sev_migration: live migration disabled in OVMF\n");
> +               return false;
> +       }
> +
> +       pr_info("setup_kvm_sev_migration: live migration enabled in OVMF\n");
> +       wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
> +       return true;
> +}
> +
> +late_initcall(setup_kvm_sev_migration);
> +#endif
> +
>  /*
>   * Iterate through all possible CPUs and map the memory region pointed
>   * by apf_reason, steal_time and kvm_apic_eoi as decrypted at once.
> @@ -725,6 +773,20 @@ static void __init kvm_apic_init(void)
>
>  static void __init kvm_init_platform(void)
>  {
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +       if (sev_active() &&
> +           kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
> +               printk(KERN_INFO "KVM enable live migration\n");
> +               sev_live_mig_enabled = true;
> +               /*
> +                * If not booted using EFI, enable Live migration support.
> +                */
> +               if (!efi_enabled(EFI_BOOT))
> +                       wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN,
> +                              KVM_SEV_LIVE_MIGRATION_ENABLED);
> +       } else
> +               printk(KERN_INFO "KVM enable live migration feature unsupported\n");
> +#endif
>         kvmclock_init();
>         x86_platform.apic_post_init = kvm_apic_init;
>  }
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index c9800fa811f6..f54be71bc75f 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -46,6 +46,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
>
>  bool sev_enabled __section(.data);
>
> +bool sev_live_mig_enabled __section(.data);
> +
>  /* Buffer used for early in-place encryption by BSP, no locking needed */
>  static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
>
> @@ -204,6 +206,9 @@ static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
>         unsigned long sz = npages << PAGE_SHIFT;
>         unsigned long vaddr_end, vaddr_next;
>
> +       if (!sev_live_migration_enabled())
> +               return;
> +
>         vaddr_end = vaddr + sz;
>
>         for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> @@ -374,6 +379,12 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
>         return early_set_memory_enc_dec(vaddr, size, true);
>  }
>
> +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
> +                                       bool enc)
> +{
> +       set_memory_enc_dec_hypercall(vaddr, npages, enc);
> +}
> +
>  /*
>   * SME and SEV are very similar but they are not the same, so there are
>   * times that the kernel will need to distinguish between SME and SEV. The
> --
> 2.17.1
>


Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 16/18] KVM: x86: Mark _bss_decrypted section variables as decrypted in page encryption bitmap.
  2020-05-05 21:20 ` [PATCH v8 16/18] KVM: x86: Mark _bss_decrypted section variables as decrypted in page encryption bitmap Ashish Kalra
@ 2020-05-30  2:08   ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:08 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:20 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> Ensure that _bss_decrypted section variables such as hv_clock_boot and
> wall_clock are marked as decrypted in the page encryption bitmap if
> SEV live migration is supported.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/kernel/kvmclock.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index 34b18f6eeb2c..65777bf1218d 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -334,6 +334,18 @@ void __init kvmclock_init(void)
>         pr_info("kvm-clock: Using msrs %x and %x",
>                 msr_kvm_system_time, msr_kvm_wall_clock);
>
> +       if (sev_live_migration_enabled()) {
> +               unsigned long nr_pages;
> +               /*
> +                * sizeof(hv_clock_boot) is already PAGE_SIZE aligned
> +                */
> +               early_set_mem_enc_dec_hypercall((unsigned long)hv_clock_boot,
> +                                               1, 0);
> +               nr_pages = DIV_ROUND_UP(sizeof(wall_clock), PAGE_SIZE);
> +               early_set_mem_enc_dec_hypercall((unsigned long)&wall_clock,
> +                                               nr_pages, 0);
> +       }
> +
>         this_cpu_write(hv_clock_per_cpu, &hv_clock_boot[0]);
>         kvm_register_clock("primary cpu clock");
>         pvclock_set_pvti_cpu0_va(hv_clock_boot);
> --
> 2.17.1
>

Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 17/18] KVM: x86: Add kexec support for SEV Live Migration.
  2020-05-05 21:21   ` Ashish Kalra
@ 2020-05-30  2:08     ` Steve Rutherford
  -1 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:08 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh, kexec

On Tue, May 5, 2020 at 2:21 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> Before loading a new kernel via kexec, reset the parts of the host's
> page encryption bitmap that reflect kernel-specific page encryption
> status settings. We cannot reset the complete page encryption bitmap
> here, as we need to retain the UEFI/OVMF firmware-specific settings.
>
> The host's page encryption bitmap is maintained for the guest to keep
> track of the encrypted/decrypted state of its pages; therefore we need
> to explicitly mark all shared pages as encrypted again before
> rebooting into the new guest kernel.
> encrypted again before rebooting into the new guest kernel.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/kernel/kvm.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 4b29815de873..a8bc30d5b15b 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -35,6 +35,7 @@
>  #include <asm/hypervisor.h>
>  #include <asm/tlb.h>
>  #include <asm/cpuidle_haltpoll.h>
> +#include <asm/e820/api.h>
>
>  static int kvmapf = 1;
>
> @@ -358,6 +359,33 @@ static void kvm_pv_guest_cpu_reboot(void *unused)
>          */
>         if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
>                 wrmsrl(MSR_KVM_PV_EOI_EN, 0);
> +       /*
> +        * Reset the host's page encryption bitmap related to kernel
> +        * specific page encryption status settings before we load a
> +        * new kernel by kexec. NOTE: We cannot reset the complete
> +        * page encryption bitmap here as we need to retain the
> +        * UEFI/OVMF firmware specific settings.
> +        */
> +       if (sev_live_migration_enabled() && (smp_processor_id() == 0)) {
> +               int i;
> +               unsigned long nr_pages;
> +
> +               for (i = 0; i < e820_table->nr_entries; i++) {
> +                       struct e820_entry *entry = &e820_table->entries[i];
> +                       unsigned long start_pfn;
> +                       unsigned long end_pfn;
> +
> +                       if (entry->type != E820_TYPE_RAM)
> +                               continue;
What should the behavior be for other memory types that are not
expected to be mucked with by firmware? Should we avoid resetting the
enc status of pmem/pram pages?

My intuition here is that we should only preserve the enc status of
those bits that are set by the firmware.
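
As a sketch of that policy, the loop body could skip only the types the
firmware is expected to own and reset everything else (which E820 types
count as firmware-owned is an assumption here, not something the patch
defines):

		switch (entry->type) {
		case E820_TYPE_RESERVED:
		case E820_TYPE_ACPI:
		case E820_TYPE_NVS:
			/* Firmware-owned: preserve the recorded status. */
			continue;
		default:
			/* RAM, pmem, pram, ...: reset to encrypted below. */
			break;
		}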

> +
> +                       start_pfn = entry->addr >> PAGE_SHIFT;
> +                       end_pfn = (entry->addr + entry->size) >> PAGE_SHIFT;
> +                       nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
> +
> +                       kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> +                                          entry->addr, nr_pages, 1);
> +               }
> +       }
>         kvm_pv_disable_apf();
>         kvm_disable_steal_time();
>  }
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-05-05 21:22 ` [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s) Ashish Kalra
@ 2020-05-30  2:09   ` Steve Rutherford
  2020-12-04 11:11   ` Paolo Bonzini
  2020-12-04 11:22   ` Paolo Bonzini
  2 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-05-30  2:09 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Tue, May 5, 2020 at 2:22 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> For the source VM, the live migration feature is enabled explicitly
> when the guest is booting; for incoming VM(s) it is implied.
> This is required to handle the A->B->C->... VM migration case.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/kvm/svm/sev.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6f69c3a47583..ba7c0ebfa1f3 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1592,6 +1592,13 @@ int svm_set_page_enc_bitmap(struct kvm *kvm,
>         if (ret)
>                 goto unlock;
>
> +       /*
> +        * For the source VM, the live migration feature is enabled
> +        * explicitly when the guest is booting; for incoming
> +        * VM(s) it is implied.
> +        */
> +       sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
>         bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
>                     (gfn_end - gfn_start));
>
> --
> 2.17.1
>

Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-05-30  2:07   ` Steve Rutherford
@ 2020-05-30  5:49     ` Ashish Kalra
  0 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-30  5:49 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

Hello Steve,

On Fri, May 29, 2020 at 07:07:33PM -0700, Steve Rutherford wrote:
> On Tue, May 5, 2020 at 2:18 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
> >
> > From: Ashish Kalra <ashish.kalra@amd.com>
> >
> > Add support for static allocation of the unified page encryption
> > bitmap by extending the kvm_arch_commit_memory_region() callback with
> > an SVM-specific x86_op which reads the userspace-provided memory
> > regions/memslots and calculates the amount of guest RAM managed by
> > KVM, growing the bitmap based on that information, i.e. the highest
> > guest PA that is mapped by a memslot.
> >
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  1 +
> >  arch/x86/kvm/svm/sev.c          | 35 +++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/svm/svm.c          |  1 +
> >  arch/x86/kvm/svm/svm.h          |  1 +
> >  arch/x86/kvm/x86.c              |  5 +++++
> >  5 files changed, 43 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index fc74144d5ab0..b573ea85b57e 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1254,6 +1254,7 @@ struct kvm_x86_ops {
> >
> >         bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
> >         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
> > +       void (*commit_memory_region)(struct kvm *kvm, enum kvm_mr_change change);
> >         int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
> >                                   unsigned long sz, unsigned long mode);
> >         int (*get_page_enc_bitmap)(struct kvm *kvm,
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 30efc1068707..c0d7043a0627 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -1377,6 +1377,41 @@ static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
> >         return 0;
> >  }
> >
> > +void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change)
> > +{
> > +       struct kvm_memslots *slots;
> > +       struct kvm_memory_slot *memslot;
> > +       gfn_t start, end = 0;
> > +
> > +       spin_lock(&kvm->mmu_lock);
> > +       if (change == KVM_MR_CREATE) {
> > +               slots = kvm_memslots(kvm);
> > +               kvm_for_each_memslot(memslot, slots) {
> > +                       start = memslot->base_gfn;
> > +                       end = memslot->base_gfn + memslot->npages;
> > +                       /*
> > +                        * The KVM memslots list is sorted, starting with
> > +                        * the highest mapped guest PA, so pick the topmost
> > +                        * valid guest PA.
> > +                        */
> > +                       if (memslot->npages)
> > +                               break;
> > +               }
> > +       }
> > +       spin_unlock(&kvm->mmu_lock);
> > +
> > +       if (end) {
> > +               /*
> > +                * NOTE: This callback is invoked from the vm ioctl
> > +                * set_user_memory_region path, hence we can use a
> > +                * mutex here.
> > +                */
> > +               mutex_lock(&kvm->lock);
> > +               sev_resize_page_enc_bitmap(kvm, end);
> > +               mutex_unlock(&kvm->lock);
> > +       }
> > +}
> > +
> >  int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
> >                                   unsigned long npages, unsigned long enc)
> >  {
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 501e82f5593c..442adbbb0641 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -4015,6 +4015,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
> >
> >         .check_nested_events = svm_check_nested_events,
> >
> > +       .commit_memory_region = svm_commit_memory_region,
> >         .page_enc_status_hc = svm_page_enc_status_hc,
> >         .get_page_enc_bitmap = svm_get_page_enc_bitmap,
> >         .set_page_enc_bitmap = svm_set_page_enc_bitmap,
> > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > index 2ebdcce50312..fd99e0a5417a 100644
> > --- a/arch/x86/kvm/svm/svm.h
> > +++ b/arch/x86/kvm/svm/svm.h
> > @@ -406,6 +406,7 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
> >                                   unsigned long npages, unsigned long enc);
> >  int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
> >  int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
> > +void svm_commit_memory_region(struct kvm *kvm, enum kvm_mr_change change);
> >
> >  /* avic.c */
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index c4166d7a0493..8938de868d42 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -10133,6 +10133,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> >                 kvm_mmu_change_mmu_pages(kvm,
> >                                 kvm_mmu_calculate_default_mmu_pages(kvm));
> >
> > +       if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
> > +               if (kvm_x86_ops.commit_memory_region)
> > +                       kvm_x86_ops.commit_memory_region(kvm, change);
> Why not just call this every time (if it exists) and have the
> kvm_x86_op determine if it should do anything?
> 
> It seems like it's a nop anyway unless you are doing a create.
> 

Yes, this makes sense. 

I will call it unconditionally if it exists and let the callback
determine what to do with it.

Thanks,
Ashish

> > +       }
> > +
> >         /*
> >          * Dirty logging tracks sptes in 4k granularity, meaning that large
> >          * sptes have to be split.  If live migration is successful, the guest
> > --
> > 2.17.1
> >

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID.
  2020-05-30  2:07   ` Steve Rutherford
@ 2020-05-30  5:51     ` Ashish Kalra
  0 siblings, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-05-30  5:51 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

Hello Steve,

On Fri, May 29, 2020 at 07:07:56PM -0700, Steve Rutherford wrote:
> On Tue, May 5, 2020 at 2:20 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
> >
> > From: Ashish Kalra <ashish.kalra@amd.com>
> >
> > Introduce a new AMD Memory Encryption GUID which is currently
> > used for defining a new UEFI enviroment variable which indicates
> > UEFI/OVMF support for the SEV live migration feature. This variable
> > is setup when UEFI/OVMF detects host/hypervisor support for SEV
> > live migration and later this variable is read by the kernel using
> > EFI runtime services to verify if OVMF supports the live migration
> > feature.
> >
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> > ---
> >  include/linux/efi.h | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/include/linux/efi.h b/include/linux/efi.h
> > index 251f1f783cdf..2efb42ccf3a8 100644
> > --- a/include/linux/efi.h
> > +++ b/include/linux/efi.h
> > @@ -358,6 +358,7 @@ void efi_native_runtime_setup(void);
> >
> >  /* OEM GUIDs */
> >  #define DELLEMC_EFI_RCI2_TABLE_GUID            EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
> > +#define MEM_ENCRYPT_GUID                       EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
> >
> >  typedef struct {
> >         efi_guid_t guid;
> > --
> > 2.17.1
> >
> Have you gotten this GUID upstreamed into edk2?
> 

Not yet.

This patch and the other OVMF patches are ready to be sent for
upstreaming; I was waiting for this kernel patch set to be
accepted and upstreamed.

> Reviewed-by: Steve Rutherford <srutherford@google.com>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 00/18] Add AMD SEV guest live migration support
  2020-05-18 19:07 ` [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
@ 2020-06-01 20:02   ` Steve Rutherford
  2020-06-03 22:14     ` Ashish Kalra
  0 siblings, 1 reply; 59+ messages in thread
From: Steve Rutherford @ 2020-06-01 20:02 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

On Mon, May 18, 2020 at 12:07 PM Ashish Kalra <ashish.kalra@amd.com> wrote:
>
> Hello All,
>
> Any other feedback, review or comments on this patch-set ?
>
> Thanks,
> Ashish
>
> On Tue, May 05, 2020 at 09:13:49PM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@amd.com>
> >
> > The series adds support for AMD SEV guest live migration commands. To protect
> > the confidentiality of SEV-protected guest memory while in transit, we need to
> > use the SEV commands defined in the SEV API spec [1].
> >
> > SEV guest VMs have the concept of private and shared memory. Private memory
> > is encrypted with the guest-specific key, while shared memory may be encrypted
> > with the hypervisor key. The commands provided by the SEV FW are meant to be
> > used for private memory only. The patch series introduces a new hypercall.
> > The guest OS can use this hypercall to notify the page encryption status.
> > If the page is encrypted with the guest-specific key then we use the SEV
> > commands during migration. If the page is not encrypted then we fall back
> > to the default path.
> >
> > The patch adds new ioctls KVM_{SET,GET}_PAGE_ENC_BITMAP. The ioctls can be
> > used by qemu to get and set the page encryption bitmap. Qemu can consult this
> > bitmap during migration to know whether a page is encrypted.
> >
> > This section describes how the SEV live migration feature is negotiated
> > between the host and the guest. The host indicates support for this
> > feature via KVM_FEATURE_CPUID. The guest firmware (OVMF) detects the
> > feature and sets a UEFI environment variable indicating OVMF support for
> > live migration. The guest kernel also detects the host support via cpuid
> > and, in case of an EFI boot, verifies that OVMF supports this feature by
> > reading the UEFI environment variable; if it is set, it enables live
> > migration on the host by writing to a custom MSR. If not booted under
> > EFI, it simply enables the feature by writing to the custom MSR directly.
> > The host returns an error from the SET_PAGE_ENC_BITMAP ioctl if the guest
> > has not enabled live migration.
> >
> > A branch containing these patches is available here:
> > https://github.com/AMDESE/linux/tree/sev-migration-v8
> >
> > [1] https://developer.amd.com/wp-content/resources/55766.PDF
> >
> > Changes since v7:
> > - Removed the hypervisor specific hypercall/paravirt callback for
> >   SEV live migration and moved back to calling kvm_sev_hypercall3
> >   directly.
> > - Fix build errors as
> >   Reported-by: kbuild test robot <lkp@intel.com>, specifically fixed
> >   build error when CONFIG_HYPERVISOR_GUEST=y and
> >   CONFIG_AMD_MEM_ENCRYPT=n.
> > - Implicitly enabled live migration for incoming VM(s) to handle
> >   A->B->C->... VM migrations.
> > - Fixed Documentation as per comments on v6 patches.
> > - Fixed error return path in sev_send_update_data() as per comments
> >   on v6 patches.
> >
> > Changes since v6:
> > - Rebasing to mainline and refactoring to the new split SVM
> >   infrastructure.
> > - Move to static allocation of the unified Page Encryption bitmap
> >   instead of the dynamic resizing of the bitmap, the static allocation
> >   is done implicitly by extending the kvm_arch_commit_memory_region() callback
> >   to add svm specific x86_ops which can read the userspace provided memory
> >   region/memslots and calculate the amount of guest RAM managed by the KVM
> >   and grow the bitmap.
> > - Fixed KVM_SET_PAGE_ENC_BITMAP ioctl to set the whole bitmap instead
> >   of simply clearing specific bits.
> > - Removed KVM_PAGE_ENC_BITMAP_RESET ioctl, which is now performed using
> >   KVM_SET_PAGE_ENC_BITMAP.
> > - Extended guest support for enabling Live Migration feature by adding a
> >   check for UEFI environment variable indicating OVMF support for Live
> >   Migration feature and additionally checking for KVM capability for the
> >   same feature. If not booted under EFI, then we simply check for KVM
> >   capability.
> > - Add hypervisor specific hypercall for SEV live migration by adding
> >   a new paravirt callback as part of x86_hyper_runtime.
> >   (x86 hypervisor specific runtime callbacks)
> > - Moving MSR handling for MSR_KVM_SEV_LIVE_MIG_EN into svm/sev code
> >   and adding check for SEV live migration enabled by guest in the
> >   KVM_GET_PAGE_ENC_BITMAP ioctl.
> > - Instead of the complete __bss_decrypted section, only specific variables
> >   such as hv_clock_boot and wall_clock are marked as decrypted in the
> >   page encryption bitmap
> >
> > Changes since v5:
> > - Fix build errors as
> >   Reported-by: kbuild test robot <lkp@intel.com>
> >
> > Changes since v4:
> > - Host support has been added to extend KVM capabilities/feature bits to
> >   include a new KVM_FEATURE_SEV_LIVE_MIGRATION, which the guest can
> >   query for host-side support for SEV live migration and a new custom MSR
> >   MSR_KVM_SEV_LIVE_MIG_EN is added for guest to enable the SEV live
> >   migration feature.
> > - Ensure that _bss_decrypted section is marked as decrypted in the
> >   page encryption bitmap.
> > - Fixing KVM_GET_PAGE_ENC_BITMAP ioctl to return the correct bitmap
> >   as per the number of pages being requested by the user. Ensure that
> >   we only copy bmap->num_pages bytes in the userspace buffer, if
> >   bmap->num_pages is not byte aligned we read the trailing bits
> >   from the userspace and copy those bits as is. This fixes guest
> >   page(s) corruption issues observed after migration completion.
> > - Add kexec support for SEV Live Migration to reset the host's
> >   page encryption bitmap related to kernel specific page encryption
> >   status settings before we load a new kernel by kexec. We cannot
> >   reset the complete page encryption bitmap here as we need to
> >   retain the UEFI/OVMF firmware specific settings.
> >
> > Changes since v3:
> > - Rebasing to mainline and testing.
> > - Adding a new KVM_PAGE_ENC_BITMAP_RESET ioctl, which resets the
> >   page encryption bitmap on a guest reboot event.
> > - Adding a more reliable sanity check for GPA range being passed to
> >   the hypercall to ensure that guest MMIO ranges are also marked
> >   in the page encryption bitmap.
> >
> > Changes since v2:
> >  - reset the page encryption bitmap on vcpu reboot
> >
> > Changes since v1:
> >  - Add support to share the page encryption between the source and target
> >    machine.
> >  - Fix review feedbacks from Tom Lendacky.
> >  - Add check to limit the session blob length.
> >  - Update KVM_GET_PAGE_ENC_BITMAP ioctl to use the base_gfn instead of
> >    the memory slot when querying the bitmap.
> >
> > Ashish Kalra (7):
> >   KVM: SVM: Add support for static allocation of unified Page Encryption
> >     Bitmap.
> >   KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature &
> >     Custom MSR.
> >   EFI: Introduce the new AMD Memory Encryption GUID.
> >   KVM: x86: Add guest support for detecting and enabling SEV Live
> >     Migration feature.
> >   KVM: x86: Mark _bss_decrypted section variables as decrypted in page
> >     encryption bitmap.
> >   KVM: x86: Add kexec support for SEV Live Migration.
> >   KVM: SVM: Enable SEV live migration feature implicitly on Incoming
> >     VM(s).
> >
> > Brijesh Singh (11):
> >   KVM: SVM: Add KVM_SEV SEND_START command
> >   KVM: SVM: Add KVM_SEND_UPDATE_DATA command
> >   KVM: SVM: Add KVM_SEV_SEND_FINISH command
> >   KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
> >   KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
> >   KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
> >   KVM: x86: Add AMD SEV specific Hypercall3
> >   KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
> >   KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
> >   mm: x86: Invoke hypercall when page encryption status is changed
> >   KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl
> >
> >  .../virt/kvm/amd-memory-encryption.rst        | 120 +++
> >  Documentation/virt/kvm/api.rst                |  71 ++
> >  Documentation/virt/kvm/cpuid.rst              |   5 +
> >  Documentation/virt/kvm/hypercalls.rst         |  15 +
> >  Documentation/virt/kvm/msr.rst                |  10 +
> >  arch/x86/include/asm/kvm_host.h               |   7 +
> >  arch/x86/include/asm/kvm_para.h               |  12 +
> >  arch/x86/include/asm/mem_encrypt.h            |  11 +
> >  arch/x86/include/asm/paravirt.h               |  10 +
> >  arch/x86/include/asm/paravirt_types.h         |   2 +
> >  arch/x86/include/uapi/asm/kvm_para.h          |   5 +
> >  arch/x86/kernel/kvm.c                         |  90 +++
> >  arch/x86/kernel/kvmclock.c                    |  12 +
> >  arch/x86/kernel/paravirt.c                    |   1 +
> >  arch/x86/kvm/svm/sev.c                        | 732 +++++++++++++++++-
> >  arch/x86/kvm/svm/svm.c                        |  21 +
> >  arch/x86/kvm/svm/svm.h                        |   9 +
> >  arch/x86/kvm/vmx/vmx.c                        |   1 +
> >  arch/x86/kvm/x86.c                            |  35 +
> >  arch/x86/mm/mem_encrypt.c                     |  68 +-
> >  arch/x86/mm/pat/set_memory.c                  |   7 +
> >  include/linux/efi.h                           |   1 +
> >  include/linux/psp-sev.h                       |   8 +-
> >  include/uapi/linux/kvm.h                      |  52 ++
> >  include/uapi/linux/kvm_para.h                 |   1 +
> >  25 files changed, 1297 insertions(+), 9 deletions(-)
> >
> > --
> > 2.17.1
> >

Hey all,
These patches look pretty reasonable at this point. What's the next
step for getting them merged?

Thanks,
Steve

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 00/18] Add AMD SEV guest live migration support
  2020-06-01 20:02   ` Steve Rutherford
@ 2020-06-03 22:14     ` Ashish Kalra
  2020-08-05 18:29       ` Steve Rutherford
  0 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-06-03 22:14 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

Hello Steve,

On Mon, Jun 01, 2020 at 01:02:23PM -0700, Steve Rutherford wrote:
> On Mon, May 18, 2020 at 12:07 PM Ashish Kalra <ashish.kalra@amd.com> wrote:
> >
> > Hello All,
> >
> > Any other feedback, review, or comments on this patch-set?
> >
> > Thanks,
> > Ashish
> >
> > On Tue, May 05, 2020 at 09:13:49PM +0000, Ashish Kalra wrote:
> > > From: Ashish Kalra <ashish.kalra@amd.com>
> > >
> > > [...]
> 
> Hey all,
> These patches look pretty reasonable at this point. What's the next
> step for getting them merged?

I believe I have incorporated all of your main comments and feedback; for
example, the page encryption bitmap is no longer dynamically resized but
statically allocated, the SET_PAGE_ENC_BITMAP ioctl now sets the whole
bitmap, the RESET_PAGE_ENC_BITMAP ioctl has been merged into it, and
other things such as kexec support have been added.
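
With RESET merged into SET, a guest reset/reboot can be handled entirely
from userspace, roughly as below (just a sketch; struct
kvm_page_enc_bitmap and the KVM_SET_PAGE_ENC_BITMAP ioctl are the uapi
additions proposed in this series, not mainline):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>  /* kvm_page_enc_bitmap comes from this series */

    /* Mark every guest page encrypted again, e.g. across a guest reboot. */
    static int reset_page_enc_bitmap(int vm_fd, __u64 num_pages)
    {
            size_t len = (num_pages + 7) / 8;
            unsigned char *bits = malloc(len);
            struct kvm_page_enc_bitmap bmap = {
                    .start_gfn = 0,
                    .num_pages = num_pages,
            };
            int ret;

            if (!bits)
                    return -1;
            memset(bits, 0xff, len);        /* all bits set == all encrypted */
            bmap.enc_bitmap = bits;
            ret = ioctl(vm_fd, KVM_SET_PAGE_ENC_BITMAP, &bmap);
            free(bits);
            return ret;
    }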

I know you have some additional comments, and I am waiting for more
feedback and comments from others on the mailing list.

But otherwise I believe these patches are fully ready to be merged at
this point, and I am looking forward to that.

Thanks,
Ashish

> Thanks,
> Steve

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 00/18] Add AMD SEV guest live migration support
  2020-06-03 22:14     ` Ashish Kalra
@ 2020-08-05 18:29       ` Steve Rutherford
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Rutherford @ 2020-08-05 18:29 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, David Rientjes, Venu Busireddy, Brijesh Singh

Are these likely to get merged into 5.9?


On Wed, Jun 3, 2020 at 3:14 PM Ashish Kalra <ashish.kalra@amd.com> wrote:
>
> [...]

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-05-05 21:18 ` [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap Ashish Kalra
  2020-05-30  2:07   ` Steve Rutherford
@ 2020-12-04 11:08   ` Paolo Bonzini
  2020-12-04 21:38     ` Ashish Kalra
  1 sibling, 1 reply; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-04 11:08 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 05/05/20 23:18, Ashish Kalra wrote:
> Add support for static
> allocation of the unified Page encryption bitmap by extending the
> kvm_arch_commit_memory_region() callback to add SVM-specific x86_ops
> which can read the userspace provided memory region/memslots and
> calculate the amount of guest RAM managed by KVM and grow the bitmap
> based on that information, i.e. the highest guest PA that is mapped by a
> memslot.

Hi Ashish,

the commit message should explain why this is needed or useful.

Paolo


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-05-05 21:22 ` [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s) Ashish Kalra
  2020-05-30  2:09   ` Steve Rutherford
@ 2020-12-04 11:11   ` Paolo Bonzini
  2020-12-04 11:22   ` Paolo Bonzini
  2 siblings, 0 replies; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-04 11:11 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 05/05/20 23:22, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> For source VM, live migration feature is enabled explicitly
> when the guest is booting, for the incoming VM(s) it is implied.
> This is required for handling A->B->C->... VM migrations case.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>   arch/x86/kvm/svm/sev.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6f69c3a47583..ba7c0ebfa1f3 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1592,6 +1592,13 @@ int svm_set_page_enc_bitmap(struct kvm *kvm,
>   	if (ret)
>   		goto unlock;
>   
> +	/*
> +	 * For source VM, live migration feature is enabled
> +	 * explicitly when the guest is booting, for the
> +	 * incoming VM(s) it is implied.
> +	 */
> +	sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
>   	bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
>   		    (gfn_end - gfn_start));

Why?  I'd prefer the host to do this manually using a KVM_ENABLE_CAP. 
The hook in patch 12 would also be enabled/disabled using KVM_ENABLE_CAP.
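
Roughly like this from the VMM side (only a sketch;
KVM_CAP_SEV_LIVE_MIGRATION is a placeholder for a capability that would
have to be defined, not something that exists today):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* The destination VMM explicitly opts the VM into live migration. */
    static int enable_sev_live_migration(int vm_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_SEV_LIVE_MIGRATION, /* placeholder name */
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }

That keeps the policy decision in the VMM instead of having the kernel
imply it as a side effect of KVM_SET_PAGE_ENC_BITMAP.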

Paolo


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-05-05 21:19 ` [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
  2020-05-30  2:07   ` Steve Rutherford
@ 2020-12-04 11:20   ` Paolo Bonzini
  2020-12-04 16:48     ` Sean Christopherson
  2020-12-04 21:42     ` Ashish Kalra
  1 sibling, 2 replies; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-04 11:20 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 05/05/20 23:19, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> for host-side support for SEV live migration. Also add a new custom
> MSR_KVM_SEV_LIVE_MIG_EN for guest to enable the SEV live migration
> feature.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>   Documentation/virt/kvm/cpuid.rst     |  5 +++++
>   Documentation/virt/kvm/msr.rst       | 10 ++++++++++
>   arch/x86/include/uapi/asm/kvm_para.h |  5 +++++
>   arch/x86/kvm/svm/sev.c               | 14 ++++++++++++++
>   arch/x86/kvm/svm/svm.c               | 16 ++++++++++++++++
>   arch/x86/kvm/svm/svm.h               |  2 ++
>   6 files changed, 52 insertions(+)
> 
> diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
> index 01b081f6e7ea..0514523e00cd 100644
> --- a/Documentation/virt/kvm/cpuid.rst
> +++ b/Documentation/virt/kvm/cpuid.rst
> @@ -86,6 +86,11 @@ KVM_FEATURE_PV_SCHED_YIELD        13          guest checks this feature bit
>                                                 before using paravirtualized
>                                                 sched yield.
>   
> +KVM_FEATURE_SEV_LIVE_MIGRATION    14          guest checks this feature bit before
> +                                              using the page encryption state
> +                                              hypercall to notify the page state
> +                                              change
> +
>   KVM_FEATURE_CLOCSOURCE_STABLE_BIT 24          host will warn if no guest-side
>                                                 per-cpu warps are expeced in
>                                                 kvmclock
> diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
> index 33892036672d..7cd7786bbb03 100644
> --- a/Documentation/virt/kvm/msr.rst
> +++ b/Documentation/virt/kvm/msr.rst
> @@ -319,3 +319,13 @@ data:
>   
>   	KVM guests can request the host not to poll on HLT, for example if
>   	they are performing polling themselves.
> +
> +MSR_KVM_SEV_LIVE_MIG_EN:
> +        0x4b564d06
> +
> +	Control SEV Live Migration features.
> +
> +data:
> +        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature.
> +        Bit 1 enables (1) or disables (0) support for SEV Live Migration extensions.
> +        All other bits are reserved.

This doesn't say what the feature is or does, or what the extensions
are.  As far as I understand, bit 0 is a guest->host communication that
it's properly handling the encryption bitmap.
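
For reference, the guest side of that communication boils down to
something like this (a condensed sketch using the names proposed in this
series, not mainline, and ignoring the extra OVMF/UEFI variable check
done on EFI boots):

    #include <linux/kvm_para.h>     /* kvm_para_has_feature() */
    #include <asm/msr.h>            /* wrmsrl() */

    /* Feature/MSR names below are the ones proposed in this series. */
    static void __init kvm_init_sev_live_migration(void)
    {
            if (!kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
                    return;

            /* Bit 0: the guest will keep the encryption bitmap updated. */
            wrmsrl(MSR_KVM_SEV_LIVE_MIG_EN, KVM_SEV_LIVE_MIGRATION_ENABLED);
    }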

I applied patches -13, with this one changed a bit, as follows.

diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
index cf62162d4be2..7d82d7da3835 100644
--- a/Documentation/virt/kvm/cpuid.rst
+++ b/Documentation/virt/kvm/cpuid.rst
@@ -96,6 +96,11 @@ KVM_FEATURE_MSI_EXT_DEST_ID        15          guest checks this feature bit
                                                before using extended destination
                                                ID bits in MSI address bits 11-5.
 
+KVM_FEATURE_ENCRYPTED_VM_BIT       16          guest checks this feature bit before
+                                               using the page encryption state
+                                               hypercall and encrypted VM
+                                               features MSR
+
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24          host will warn if no guest-side
                                                per-cpu warps are expected in
                                                kvmclock
diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
index e37a14c323d2..02528bc760b8 100644
--- a/Documentation/virt/kvm/msr.rst
+++ b/Documentation/virt/kvm/msr.rst
@@ -376,3 +376,13 @@ data:
  	write '1' to bit 0 of the MSR, this causes the host to re-scan its queue
  	and check if there are more notifications pending. The MSR is available
  	if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
+
+MSR_KVM_ENC_VM_FEATURE:
+        0x4b564d08
+
+	Control encrypted VM features.
+
+data:
+        Bit 0 tells the host that the guest is (1) or is not (0) issuing the
+        ``KVM_HC_PAGE_ENC_STATUS`` hypercall to keep the encrypted bitmap
+       up to date.
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 950afebfba88..3dda6e416a70 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -33,6 +33,7 @@
  #define KVM_FEATURE_PV_SCHED_YIELD	13
  #define KVM_FEATURE_ASYNC_PF_INT	14
  #define KVM_FEATURE_MSI_EXT_DEST_ID	15
+#define KVM_FEATURE_ENCRYPTED_VM	16

  #define KVM_HINTS_REALTIME      0

@@ -54,6 +55,7 @@
  #define MSR_KVM_POLL_CONTROL	0x4b564d05
  #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
  #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
+#define MSR_KVM_ENC_VM_FEATURE	0x4b564d08

  struct kvm_steal_time {
  	__u64 steal;
@@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
  #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
  #define KVM_PV_EOI_DISABLED 0x0

+#define KVM_ENC_VM_BITMAP_VALID			(1 << 0)
+
  #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index fa67f498e838..0673531233da 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1478,6 +1478,17 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
  	return 0;
  }

+void sev_update_enc_vm_flags(struct kvm *kvm, u64 data)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	if (!sev_guest(kvm))
+		return;
+
+	if (data & KVM_ENC_VM_BITMAP_VALID)
+		sev->live_migration_enabled = true;
+}
+
  int svm_get_page_enc_bitmap(struct kvm *kvm,
  				   struct kvm_page_enc_bitmap *bmap)
  {
@@ -1490,6 +1501,9 @@ int svm_get_page_enc_bitmap(struct kvm *kvm,
  	if (!sev_guest(kvm))
  		return -ENOTTY;

+	if (!sev->live_migration_enabled)
+		return -EINVAL;
+
  	gfn_start = bmap->start_gfn;
  	gfn_end = gfn_start + bmap->num_pages;

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 66f7014eaae2..8ac2c5b9c675 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2766,6 +2766,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
  		svm->msr_decfg = data;
  		break;
  	}
+	case MSR_KVM_ENC_VM_FEATURE:
+		sev_update_enc_vm_flags(vcpu->kvm, data);
+		break;
  	case MSR_IA32_APICBASE:
  		if (kvm_vcpu_apicv_active(vcpu))
  			avic_update_vapic_bar(to_svm(vcpu), data);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 287559b8c5b2..363c3f8d00b7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -66,6 +66,7 @@ struct kvm_sev_info {
  	int fd;			/* SEV device fd */
  	unsigned long pages_locked; /* Number of pages locked */
  	struct list_head regions_list;  /* List of registered regions */
+	bool live_migration_enabled;
  	unsigned long *page_enc_bmap;
  	unsigned long page_enc_bmap_size;
  };
@@ -504,5 +505,6 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
 				  unsigned long npages, unsigned long enc);
 int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
 int svm_set_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
+void sev_update_enc_vm_flags(struct kvm *kvm, u64 data);

  #endif

Paolo


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-05-05 21:22 ` [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s) Ashish Kalra
  2020-05-30  2:09   ` Steve Rutherford
  2020-12-04 11:11   ` Paolo Bonzini
@ 2020-12-04 11:22   ` Paolo Bonzini
  2020-12-04 21:46     ` Ashish Kalra
  2 siblings, 1 reply; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-04 11:22 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 05/05/20 23:22, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> For source VM, live migration feature is enabled explicitly
> when the guest is booting, for the incoming VM(s) it is implied.
> This is required for handling A->B->C->... VM migrations case.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>   arch/x86/kvm/svm/sev.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6f69c3a47583..ba7c0ebfa1f3 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1592,6 +1592,13 @@ int svm_set_page_enc_bitmap(struct kvm *kvm,
>   	if (ret)
>   		goto unlock;
>   
> +	/*
> +	 * For source VM, live migration feature is enabled
> +	 * explicitly when the guest is booting, for the
> +	 * incoming VM(s) it is implied.
> +	 */
> +	sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
>   	bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
>   		    (gfn_end - gfn_start));
>   
> 

I would prefer that userspace does this using KVM_SET_MSR instead.
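
Something along these lines on the destination (a sketch; the vcpu ioctl
is KVM_SET_MSRS, and the MSR index and enable bit are the ones proposed
in this series, not mainline):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Destination VMM marks live migration as enabled for the guest. */
    static int set_sev_live_mig_msr(int vcpu_fd)
    {
            struct {
                    struct kvm_msrs hdr;
                    struct kvm_msr_entry entry;
            } msrs = {
                    .hdr.nmsrs = 1,
                    .entry = {
                            /* MSR_KVM_SEV_LIVE_MIG_EN from this series */
                            .index = 0x4b564d06,
                            .data  = 1,     /* bit 0: enabled */
                    },
            };

            /* Returns the number of MSRs processed (1 on success). */
            return ioctl(vcpu_fd, KVM_SET_MSRS, &msrs.hdr);
    }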

Paolo


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 11:20   ` Paolo Bonzini
@ 2020-12-04 16:48     ` Sean Christopherson
  2020-12-04 17:08       ` Ashish Kalra
  2020-12-04 18:06       ` Ashish Kalra
  2020-12-04 21:42     ` Ashish Kalra
  1 sibling, 2 replies; 59+ messages in thread
From: Sean Christopherson @ 2020-12-04 16:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86,
	kvm, linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On Fri, Dec 04, 2020, Paolo Bonzini wrote:
> I applied patches -13, with this one changed a bit, as follows.

Can we hold up on applying this series?  Unless I'm misunderstanding things,
much of what you're applying is superseded by a much more recent series to add
only the page encryption bitmap[*].  I have several concerns/comments for that
series that I would like to hash out before we add a new ioctl().  I'll try to
respond next week; my time is unfortunately limited due to onboarding activities.

[*] https://lkml.kernel.org/r/cover.1606782580.git.ashish.kalra@amd.com

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 16:48     ` Sean Christopherson
@ 2020-12-04 17:08       ` Ashish Kalra
  2020-12-04 17:23         ` Sean Christopherson
  2020-12-04 18:06       ` Ashish Kalra
  1 sibling, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-12-04 17:08 UTC (permalink / raw)
  To: seanjc
  Cc: Ashish.Kalra, Thomas.Lendacky, bp, brijesh.singh, hpa, joro, kvm,
	linux-kernel, mingo, pbonzini, rientjes, srutherford, tglx,
	venu.busireddy, x86

An immediate response: the SEV live migration patches are actually preferred over the Page encryption bitmap
patches. In other words, if the SEV live migration patches are applied then we don't need the Page encryption
bitmap patches, and we would prefer the live migration series to be applied.

It is not that the page encryption bitmap series supersedes the live migration patches; it is simply carved
out of the live migration patches.

Thanks,
Ashish

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 17:08       ` Ashish Kalra
@ 2020-12-04 17:23         ` Sean Christopherson
  2020-12-06 10:57           ` Paolo Bonzini
  0 siblings, 1 reply; 59+ messages in thread
From: Sean Christopherson @ 2020-12-04 17:23 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Thomas.Lendacky, bp, brijesh.singh, hpa, joro, kvm, linux-kernel,
	mingo, pbonzini, rientjes, srutherford, tglx, venu.busireddy,
	x86

On Fri, Dec 04, 2020, Ashish Kalra wrote:
> An immediate response: the SEV live migration patches are actually preferred
> over the Page encryption bitmap patches. In other words, if the SEV live
> migration patches are applied then we don't need the Page encryption bitmap
> patches, and we would prefer the live migration series to be applied.
>
> It is not that the page encryption bitmap series supersedes the live
> migration patches; it is simply carved out of the live migration patches.

In that case, can you post a fresh version of the live migration series?  Paolo
is obviously willing to take a big chunk of that series, and it will likely be
easier to review with the full context, e.g. one of my comments on the standalone
encryption bitmap series was going to be that it's hard to review without seeing
the live migration aspect.

Thanks!

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 16:48     ` Sean Christopherson
  2020-12-04 17:08       ` Ashish Kalra
@ 2020-12-04 18:06       ` Ashish Kalra
  2020-12-04 18:41         ` Sean Christopherson
  1 sibling, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-12-04 18:06 UTC (permalink / raw)
  To: seanjc
  Cc: Ashish.Kalra, Thomas.Lendacky, bp, brijesh.singh, hpa, joro, kvm,
	linux-kernel, mingo, pbonzini, rientjes, srutherford, tglx,
	venu.busireddy, x86

Yes, I will post a fresh version of the live migration patches.

Also, can you please check your email settings? We are only able to see your responses on the
mailing list, but we are not getting your direct responses.

Thanks,
Ashish

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 18:06       ` Ashish Kalra
@ 2020-12-04 18:41         ` Sean Christopherson
  2020-12-04 18:48           ` Kalra, Ashish
  2020-12-04 19:02           ` Tom Lendacky
  0 siblings, 2 replies; 59+ messages in thread
From: Sean Christopherson @ 2020-12-04 18:41 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Tom Lendacky, bp, Brijesh Singh, H. Peter Anvin, Joerg Roedel,
	kvm list, LKML, Ingo Molnar, Paolo Bonzini, David Rientjes,
	Steve Rutherford, Thomas Gleixner, venu.busireddy, X86 ML

On Fri, Dec 4, 2020 at 10:07 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> Yes, I will post a fresh version of the live migration patches.
>
> Also, can you please check your email settings? We are only able to see your
> responses on the mailing list, but we are not getting your direct responses.

Hrm, as in you don't get the email?

Is this email any different?  Sending via gmail instead of mutt...

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 18:41         ` Sean Christopherson
@ 2020-12-04 18:48           ` Kalra, Ashish
  2020-12-04 19:02           ` Tom Lendacky
  1 sibling, 0 replies; 59+ messages in thread
From: Kalra, Ashish @ 2020-12-04 18:48 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Lendacky, Thomas, bp, Singh, Brijesh, H. Peter Anvin,
	Joerg Roedel, kvm list, LKML, Ingo Molnar, Paolo Bonzini,
	David Rientjes, Steve Rutherford, Thomas Gleixner,
	venu.busireddy, X86 ML

This time I received your email directly.

Thanks,
Ashish

> On Dec 4, 2020, at 12:41 PM, Sean Christopherson <seanjc@google.com> wrote:
> 
> On Fri, Dec 4, 2020 at 10:07 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>> 
>> Yes, I will post a fresh version of the live migration patches.
>> 
>> Also, can you please check your email settings? We are only able to see your
>> responses on the mailing list, but we are not getting your direct responses.
> 
> Hrm, as in you don't get the email?
> 
> Is this email any different?  Sending via gmail instead of mutt...

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 18:41         ` Sean Christopherson
  2020-12-04 18:48           ` Kalra, Ashish
@ 2020-12-04 19:02           ` Tom Lendacky
  1 sibling, 0 replies; 59+ messages in thread
From: Tom Lendacky @ 2020-12-04 19:02 UTC (permalink / raw)
  To: Sean Christopherson, Ashish Kalra
  Cc: bp, Brijesh Singh, H. Peter Anvin, Joerg Roedel, kvm list, LKML,
	Ingo Molnar, Paolo Bonzini, David Rientjes, Steve Rutherford,
	Thomas Gleixner, venu.busireddy, X86 ML

On 12/4/20 12:41 PM, Sean Christopherson wrote:
> On Fri, Dec 4, 2020 at 10:07 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>>
>> Yes, I will post a fresh version of the live migration patches.
>>
>> Also, can you please check your email settings? We are only able to see your
>> responses on the mailing list, but we are not getting your direct responses.
> 
> Hrm, as in you don't get the email?
> 
> Is this email any different?  Sending via gmail instead of mutt...

FWIW, I received the previous email(s). It's probably something on our end.

Thanks,
Tom

> 

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-12-04 11:08   ` Paolo Bonzini
@ 2020-12-04 21:38     ` Ashish Kalra
  2020-12-06 10:19       ` Paolo Bonzini
  0 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-12-04 21:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

Hello Paolo,

On Fri, Dec 04, 2020 at 12:08:20PM +0100, Paolo Bonzini wrote:
> On 05/05/20 23:18, Ashish Kalra wrote:
> > Add support for static allocation of the unified Page encryption bitmap
> > by extending the kvm_arch_commit_memory_region() callback to add
> > SVM-specific x86_ops which can read the userspace provided memory
> > region/memslots and calculate the amount of guest RAM managed by KVM
> > and grow the bitmap based on that information, i.e. the highest guest
> > PA that is mapped by a memslot.
> 
> Hi Ashish,
> 
> the commit message should explain why this is needed or useful.
> 

Earlier we used to do dynamic resizing of the page encryption bitmap based
on the guest hypercall, but a malicious guest could potentially make a
hypercall that triggers a really large memory allocation on the host side
and may eventually cause a denial of service.

Hence, we no longer resize the page encryption bitmap dynamically in
response to the hypercall; instead we allocate it statically, based on the
guest memory allocation, by walking through the memslots and computing its
size.
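
The sizing works roughly as follows (a sketch; the helper names here are
illustrative, not the exact ones used in the series):

    #include <linux/kvm_host.h>

    /* Highest guest GFN mapped by any memslot; illustrative helper. */
    static unsigned long max_guest_gfn(struct kvm *kvm)
    {
            struct kvm_memslots *slots = kvm_memslots(kvm);
            struct kvm_memory_slot *memslot;
            unsigned long max_gfn = 0;

            kvm_for_each_memslot(memslot, slots)
                    max_gfn = max(max_gfn,
                                  (unsigned long)(memslot->base_gfn +
                                                  memslot->npages));

            return max_gfn;
    }

    /* One encryption-status bit per guest page. */
    static unsigned long page_enc_bitmap_size(struct kvm *kvm)
    {
            return BITS_TO_LONGS(max_guest_gfn(kvm)) * sizeof(unsigned long);
    }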

I will add the above explanation to the fresh series of the patch-set I am
going to post.

Thanks,
Ashish

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 11:20   ` Paolo Bonzini
  2020-12-04 16:48     ` Sean Christopherson
@ 2020-12-04 21:42     ` Ashish Kalra
  1 sibling, 0 replies; 59+ messages in thread
From: Ashish Kalra @ 2020-12-04 21:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

Hello Paolo,

On Fri, Dec 04, 2020 at 12:20:46PM +0100, Paolo Bonzini wrote:
> On 05/05/20 23:19, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@amd.com>
> > 
> > Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> > for host-side support for SEV live migration. Also add a new custom
> > MSR_KVM_SEV_LIVE_MIG_EN for guest to enable the SEV live migration
> > feature.
> > 
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> > ---
> >   Documentation/virt/kvm/cpuid.rst     |  5 +++++
> >   Documentation/virt/kvm/msr.rst       | 10 ++++++++++
> >   arch/x86/include/uapi/asm/kvm_para.h |  5 +++++
> >   arch/x86/kvm/svm/sev.c               | 14 ++++++++++++++
> >   arch/x86/kvm/svm/svm.c               | 16 ++++++++++++++++
> >   arch/x86/kvm/svm/svm.h               |  2 ++
> >   6 files changed, 52 insertions(+)
> > 
> > diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
> > index 01b081f6e7ea..0514523e00cd 100644
> > --- a/Documentation/virt/kvm/cpuid.rst
> > +++ b/Documentation/virt/kvm/cpuid.rst
> > @@ -86,6 +86,11 @@ KVM_FEATURE_PV_SCHED_YIELD        13          guest checks this feature bit
> >                                                 before using paravirtualized
> >                                                 sched yield.
> > +KVM_FEATURE_SEV_LIVE_MIGRATION    14          guest checks this feature bit before
> > +                                              using the page encryption state
> > +                                              hypercall to notify the page state
> > +                                              change
> > +
> >   KVM_FEATURE_CLOCSOURCE_STABLE_BIT 24          host will warn if no guest-side
> >                                                 per-cpu warps are expeced in
> >                                                 kvmclock
> > diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
> > index 33892036672d..7cd7786bbb03 100644
> > --- a/Documentation/virt/kvm/msr.rst
> > +++ b/Documentation/virt/kvm/msr.rst
> > @@ -319,3 +319,13 @@ data:
> >   	KVM guests can request the host not to poll on HLT, for example if
> >   	they are performing polling themselves.
> > +
> > +MSR_KVM_SEV_LIVE_MIG_EN:
> > +        0x4b564d06
> > +
> > +	Control SEV Live Migration features.
> > +
> > +data:
> > +        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature.
> > +        Bit 1 enables (1) or disables (0) support for SEV Live Migration extensions.
> > +        All other bits are reserved.
> 
> This doesn't say what the feature is or does, and what the extensions are.
> As far as I understand bit 0 is a guest->host communication that it's
> properly handling the encryption bitmap.
> 
Yes, your understanding of bit 0 is correct. The extensions bit is for any
future extensions related to this live migration support, such as
extensions/support for accelerated migration.

> I applied patches -13, with this one changed a bit, as follows.

Yes, I will post a fresh series of this patch-set.

Thanks,
Ashish



* Re: [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-12-04 11:22   ` Paolo Bonzini
@ 2020-12-04 21:46     ` Ashish Kalra
  2020-12-06 10:18       ` Paolo Bonzini
  0 siblings, 1 reply; 59+ messages in thread
From: Ashish Kalra @ 2020-12-04 21:46 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

Hello Paolo,

On Fri, Dec 04, 2020 at 12:22:48PM +0100, Paolo Bonzini wrote:
> On 05/05/20 23:22, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@amd.com>
> > 
> > For the source VM, the live migration feature is enabled explicitly
> > when the guest is booting; for the incoming VM(s) it is implied.
> > This is required to handle the A->B->C->... VM migration case.
> > 
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> > ---
> >   arch/x86/kvm/svm/sev.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 6f69c3a47583..ba7c0ebfa1f3 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -1592,6 +1592,13 @@ int svm_set_page_enc_bitmap(struct kvm *kvm,
> >   	if (ret)
> >   		goto unlock;
> > +	/*
> > +	 * For source VM, live migration feature is enabled
> > +	 * explicitly when the guest is booting, for the
> > +	 * incoming VM(s) it is implied.
> > +	 */
> > +	sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
> > +
> >   	bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
> >   		    (gfn_end - gfn_start));
> > 
> 
> I would prefer that userspace does this using KVM_SET_MSR instead.
> 
> 

Ok.

But this is for a VM which has already been migrated, based on host and
guest feature support, negotiation, and enablement of live migration, so
I am assuming that a VM which has already been migrated can have this
support enabled implicitly for further migrations.

Thanks,
Ashish


* Re: [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s).
  2020-12-04 21:46     ` Ashish Kalra
@ 2020-12-06 10:18       ` Paolo Bonzini
  0 siblings, 0 replies; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-06 10:18 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 04/12/20 22:46, Ashish Kalra wrote:
>> I would prefer that userspace does this using KVM_SET_MSR instead.
>
> Ok.
> 
> But, this is for a VM which has already been migrated based on feature
> support on host and guest and host negotation and enablement of the live
> migration support, so i am assuming that a VM which has already been
> migrated can have this support enabled implicitly for further migration.

It's just that it is an unexpected side effect of 
KVM_SET_PAGE_ENC_BITMAP.  I prefer to have it tied to the more obvious 
KVM_SET_MSR ioctl.
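
For illustration, a minimal userspace sketch of that approach (the helper
is hypothetical; vcpu_fd is assumed to be an already-created vCPU file
descriptor, and the MSR constants are local copies of the values from the
patch):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_KVM_ENC_VM_FEATURE  0x4b564d08
#define KVM_ENC_VM_BITMAP_VALID (1 << 0)

/* Mark live migration enabled on the destination via KVM_SET_MSRS. */
static int enable_enc_vm_migration(int vcpu_fd)
{
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} msr_data;

	memset(&msr_data, 0, sizeof(msr_data));
	msr_data.hdr.nmsrs = 1;
	msr_data.entry.index = MSR_KVM_ENC_VM_FEATURE;
	msr_data.entry.data = KVM_ENC_VM_BITMAP_VALID;

	/* KVM_SET_MSRS returns the number of MSRs actually set. */
	return ioctl(vcpu_fd, KVM_SET_MSRS, &msr_data);
}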

Paolo



* Re: [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap.
  2020-12-04 21:38     ` Ashish Kalra
@ 2020-12-06 10:19       ` Paolo Bonzini
  0 siblings, 0 replies; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-06 10:19 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, rientjes, venu.busireddy,
	brijesh.singh

On 04/12/20 22:38, Ashish Kalra wrote:
> Earlier we used to resize the page encryption bitmap dynamically based
> on the guest hypercall, but a malicious guest could potentially make a
> hypercall that triggers a really large memory allocation on the host
> side and may eventually cause a denial of service.
> 
> Hence we no longer resize the page encryption bitmap dynamically per
> the hypercall; instead we allocate it statically based on the guest
> memory allocation, by walking through the memslots and computing its
> size.
> 
> I will add the above comment to the fresh series of the patch set I am
> going to post.

Sounds good, thanks.  If there are no other changes I can include this 
in the commit message myself.
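
For reference, a minimal sketch of the memslot-based sizing described
above (illustrative only; page_enc_bmap naming follows the series, but
this helper itself is hypothetical and assumes the caller holds the
appropriate memslot locks):

/*
 * Derive the bitmap size from userspace-defined memory regions rather
 * than from a guest-controlled hypercall argument: one bit per gfn, up
 * to the largest gfn backed by any memslot.
 */
static unsigned long sev_calc_page_enc_bitmap_size(struct kvm *kvm)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	struct kvm_memory_slot *slot;
	unsigned long nr_gfns = 0;

	kvm_for_each_memslot(slot, slots)
		nr_gfns = max_t(unsigned long, nr_gfns,
				slot->base_gfn + slot->npages);

	return nr_gfns;
}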

Paolo



* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-04 17:23         ` Sean Christopherson
@ 2020-12-06 10:57           ` Paolo Bonzini
  2020-12-06 14:09             ` Kalra, Ashish
  0 siblings, 1 reply; 59+ messages in thread
From: Paolo Bonzini @ 2020-12-06 10:57 UTC (permalink / raw)
  To: Sean Christopherson, Ashish Kalra
  Cc: Thomas.Lendacky, bp, brijesh.singh, hpa, joro, kvm, linux-kernel,
	mingo, rientjes, srutherford, tglx, venu.busireddy, x86

On 04/12/20 18:23, Sean Christopherson wrote:
> On Fri, Dec 04, 2020, Ashish Kalra wrote:
>> An immediate response: the SEV live migration patches are preferred
>> over the page encryption bitmap patches; in other words, if the SEV live
>> migration patches are applied then we don't need the page encryption bitmap
>> patches, and we prefer the live migration series to be applied.
>>
>> It is not that the page encryption bitmap series supersedes the live
>> migration patches; it is just cut out of the live migration patches.
> In that case, can you post a fresh version of the live migration series?  Paolo
> is obviously willing to take a big chunk of that series, and it will likely be
> easier to review with the full context, e.g. one of my comments on the standalone
> encryption bitmap series was going to be that it's hard to review without seeing
> the live migration aspect.

It still applies without change.  For now I'll only keep the series 
queued in my (n)SVM branch, but will hold off on applying it to 
kvm.git's queue and next branches.

Thanks,

Paolo



* Re: [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2020-12-06 10:57           ` Paolo Bonzini
@ 2020-12-06 14:09             ` Kalra, Ashish
  0 siblings, 0 replies; 59+ messages in thread
From: Kalra, Ashish @ 2020-12-06 14:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Lendacky, Thomas, bp, Singh, Brijesh, hpa,
	joro, kvm, linux-kernel, mingo, rientjes, srutherford, tglx,
	venu.busireddy, x86


> On Dec 6, 2020, at 4:58 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> On 04/12/20 18:23, Sean Christopherson wrote:
>>> On Fri, Dec 04, 2020, Ashish Kalra wrote:
>>> An immediate response: the SEV live migration patches are preferred
>>> over the page encryption bitmap patches; in other words, if the SEV live
>>> migration patches are applied then we don't need the page encryption bitmap
>>> patches, and we prefer the live migration series to be applied.
>>> 
>>> It is not that the page encryption bitmap series supersedes the live
>>> migration patches; it is just cut out of the live migration patches.
>> In that case, can you post a fresh version of the live migration series?  Paolo
>> is obviously willing to take a big chunk of that series, and it will likely be
>> easier to review with the full context, e.g. one of my comments on the standalone
>> encryption bitmap series was going to be that it's hard to review without seeing
>> the live migration aspect.
> 
> It still applies without change.  For now I'll only keep the series queued in my (n)SVM branch, but will hold off on applying it to kvm.git's queue and next branches.
> 

Ok thanks Paolo.



Thread overview: 59+ messages
2020-05-05 21:13 [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
2020-05-05 21:14 ` [PATCH v8 01/18] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
2020-05-05 21:14 ` [PATCH v8 02/18] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
2020-05-05 22:48   ` Venu Busireddy
2020-05-05 21:15 ` [PATCH v8 03/18] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
2020-05-05 22:51   ` Venu Busireddy
2020-05-05 21:15 ` [PATCH v8 04/18] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
2020-05-05 22:52   ` Venu Busireddy
2020-05-05 21:15 ` [PATCH v8 05/18] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
2020-05-05 21:16 ` [PATCH v8 06/18] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
2020-05-05 21:16 ` [PATCH v8 07/18] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
2020-05-05 21:17 ` [PATCH v8 08/18] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
2020-05-30  2:05   ` Steve Rutherford
2020-05-05 21:17 ` [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl Ashish Kalra
2020-05-30  2:05   ` Steve Rutherford
2020-05-05 21:17 ` [PATCH v8 10/18] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
2020-05-30  2:06   ` Steve Rutherford
2020-05-05 21:18 ` [PATCH v8 11/18] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl Ashish Kalra
2020-05-30  2:06   ` Steve Rutherford
2020-05-05 21:18 ` [PATCH v8 12/18] KVM: SVM: Add support for static allocation of unified Page Encryption Bitmap Ashish Kalra
2020-05-30  2:07   ` Steve Rutherford
2020-05-30  5:49     ` Ashish Kalra
2020-12-04 11:08   ` Paolo Bonzini
2020-12-04 21:38     ` Ashish Kalra
2020-12-06 10:19       ` Paolo Bonzini
2020-05-05 21:19 ` [PATCH v8 13/18] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
2020-05-30  2:07   ` Steve Rutherford
2020-12-04 11:20   ` Paolo Bonzini
2020-12-04 16:48     ` Sean Christopherson
2020-12-04 17:08       ` Ashish Kalra
2020-12-04 17:23         ` Sean Christopherson
2020-12-06 10:57           ` Paolo Bonzini
2020-12-06 14:09             ` Kalra, Ashish
2020-12-04 18:06       ` Ashish Kalra
2020-12-04 18:41         ` Sean Christopherson
2020-12-04 18:48           ` Kalra, Ashish
2020-12-04 19:02           ` Tom Lendacky
2020-12-04 21:42     ` Ashish Kalra
2020-05-05 21:20 ` [PATCH v8 14/18] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
2020-05-30  2:07   ` Steve Rutherford
2020-05-30  5:51     ` Ashish Kalra
2020-05-05 21:20 ` [PATCH v8 15/18] KVM: x86: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
2020-05-30  2:08   ` Steve Rutherford
2020-05-05 21:20 ` [PATCH v8 16/18] KVM: x86: Mark _bss_decrypted section variables as decrypted in page encryption bitmap Ashish Kalra
2020-05-30  2:08   ` Steve Rutherford
2020-05-05 21:21 ` [PATCH v8 17/18] KVM: x86: Add kexec support for SEV Live Migration Ashish Kalra
2020-05-05 21:21   ` Ashish Kalra
2020-05-30  2:08   ` Steve Rutherford
2020-05-30  2:08     ` Steve Rutherford
2020-05-05 21:22 ` [PATCH v8 18/18] KVM: SVM: Enable SEV live migration feature implicitly on Incoming VM(s) Ashish Kalra
2020-05-30  2:09   ` Steve Rutherford
2020-12-04 11:11   ` Paolo Bonzini
2020-12-04 11:22   ` Paolo Bonzini
2020-12-04 21:46     ` Ashish Kalra
2020-12-06 10:18       ` Paolo Bonzini
2020-05-18 19:07 ` [PATCH v8 00/18] Add AMD SEV guest live migration support Ashish Kalra
2020-06-01 20:02   ` Steve Rutherford
2020-06-03 22:14     ` Ashish Kalra
2020-08-05 18:29       ` Steve Rutherford
