* [PATCH v13 00/12] Add AMD SEV guest live migration support
@ 2021-04-15 15:52 Ashish Kalra
  2021-04-15 15:53 ` [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
                   ` (13 more replies)
  0 siblings, 14 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:52 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

This series adds support for the AMD SEV guest live migration commands. To
protect the confidentiality of an SEV-protected guest's memory while it is in
transit, we need to use the SEV commands defined in the SEV API spec [1].

SEV guest VMs have the concept of private and shared memory. Private memory
is encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. The commands provided by the SEV firmware are meant
to be used for private memory only. The patch series introduces a new
hypercall that the guest OS can use to notify the hypervisor of page
encryption status changes. If a page is encrypted with the guest-specific
key, the SEV commands are used during migration; if a page is not encrypted,
we fall back to the default migration path.

The series uses the KVM_EXIT_HYPERCALL exit code and the hypercall-to-userspace
exit functionality as a common interface from the guest back to the VMM,
passing the guest's shared/unencrypted page information on to the userspace
VMM/Qemu. Qemu can consult this information during migration to know
whether a page is encrypted.
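
As an illustration, the VMM-side consumer of this information during
migration could look roughly like the sketch below. This is not code from
the series; is_gfn_shared(), send_plaintext_page() and
sev_send_encrypted_page() are hypothetical helpers standing in for the
VMM's bookkeeping and transport:

	/* Per-page decision in a hypothetical VMM migration loop. */
	if (is_gfn_shared(gfn))
		send_plaintext_page(gfn);      /* default migration path */
	else
		sev_send_encrypted_page(gfn);  /* KVM_SEV_SEND_UPDATE_DATA */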

This section describes how the SEV live migration feature is negotiated
between the host and guest. The host indicates support for this feature via
KVM_FEATURE_CPUID. The guest firmware (OVMF) detects this feature and sets
a UEFI environment variable indicating OVMF support for live migration. The
guest kernel also detects host support for this feature via CPUID and, in
the case of an EFI boot, verifies that OVMF supports it as well by reading
the UEFI environment variable; if the variable is set, the kernel enables
live migration on the host by writing to a custom MSR. If not booted under
EFI, the kernel simply enables the feature by writing to the custom MSR.
The MSR is also handled by the userspace VMM/Qemu.
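
In guest kernel terms the negotiation above reduces to roughly the
following (a simplified sketch: the feature bit and MSR are the ones this
series defines, KVM_SEV_LIVE_MIGRATION_ENABLED is assumed to be the enable
bit written to the MSR, and the OVMF UEFI variable lookup on EFI boots is
omitted):

	if (kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
		/* On EFI boot, first verify the UEFI environment
		 * variable that OVMF sets to advertise its support. */
		wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
		       KVM_SEV_LIVE_MIGRATION_ENABLED);
	}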

A branch containing these patches is available here:
https://github.com/AMDESE/linux/tree/sev-migration-v13

[1] https://developer.amd.com/wp-content/resources/55766.PDF

Changes since v12:
- Reset page encryption status during early boot instead of just 
  before the kexec to avoid SMP races during kvm_pv_guest_cpu_reboot().
- Remove incorrect log message in case of non-EFI boot and implicit
  enabling of SEV live migration feature.

Changes since v11:
- Clean up and remove kvm_x86_ops callback for page_enc_status_hc and
  instead add a new per-VM flag to support/enable the page encryption
  status hypercall.
- Remove KVM_EXIT_DMA_SHARE/KVM_EXIT_DMA_UNSHARE exitcodes and instead
  use the KVM_EXIT_HYPERCALL exitcode for page encryption status
  hypercall to userspace functionality. 

Changes since v10:
- Adds new KVM_EXIT_DMA_SHARE/KVM_EXIT_DMA_UNSHARE exit codes for
  hypercall-to-userspace exit functionality, as a common interface from the
  guest back to KVM, passing the guest shared/unencrypted region information
  on to the userspace VMM/Qemu. The KVM/host kernel does not maintain the
  guest shared memory regions information anymore.
- Remove implicit enabling of SEV live migration feature for an SEV
  guest, now this is explicitly in control of the userspace VMM/Qemu.
- Custom MSR handling is also now moved into userspace VMM/Qemu.
- As KVM does not maintain the guest shared memory region information
  anymore, sev_dbg_crypt() cannot bypass unencrypted guest memory
  regions without support from userspace VMM/Qemu.

Changes since v9:
- Transitioning from page encryption bitmap to the shared pages list
  to keep track of guest's shared/unencrypted memory regions.
- Move back to marking the complete __bss_decrypted section as
  decrypted in the shared pages list.
- Invoke a new function check_kvm_sev_migration() via kvm_init_platform()
  for the guest to query host-side support for SEV live migration and to
  enable the SEV live migration feature, avoiding #ifdefs in the code.
- Rename MSR_KVM_SEV_LIVE_MIG_EN to MSR_KVM_SEV_LIVE_MIGRATION.
- Invoke a new function handle_unencrypted_region() from 
  sev_dbg_crypt() to bypass unencrypted guest memory regions.

Changes since v8:
- Rebasing to kvm next branch.
- Fixed and added comments as per review feedback on v8 patches.
- Removed implicitly enabling live migration for incoming VMs in
  KVM_SET_PAGE_ENC_BITMAP; it is now done via the KVM_SET_MSR ioctl.
- Adds support for bypassing unencrypted guest memory regions for
  DBG_DECRYPT API calls, guest memory region encryption status in
  sev_dbg_decrypt() is referenced using the page encryption bitmap.

Changes since v7:
- Removed the hypervisor specific hypercall/paravirt callback for
  SEV live migration and moved back to calling kvm_sev_hypercall3 
  directly.
- Fixed build errors reported by the kbuild test robot <lkp@intel.com>,
  specifically a build error when CONFIG_HYPERVISOR_GUEST=y and
  CONFIG_AMD_MEM_ENCRYPT=n.
- Implicitly enabled live migration for incoming VM(s) to handle 
  A->B->C->... VM migrations.
- Fixed Documentation as per comments on v6 patches.
- Fixed error return path in sev_send_update_data() as per comments 
  on v6 patches. 

Changes since v6:
- Rebasing to mainline and refactoring to the new split SVM
  infrastructure.
- Move to static allocation of the unified page encryption bitmap
  instead of dynamic resizing of the bitmap. The static allocation is
  done implicitly by extending the kvm_arch_commit_memory_region() callback
  to add svm-specific x86_ops which can read the userspace-provided memory
  region/memslots, calculate the amount of guest RAM managed by KVM,
  and grow the bitmap.
- Fixed KVM_SET_PAGE_ENC_BITMAP ioctl to set the whole bitmap instead
  of simply clearing specific bits.
- Removed KVM_PAGE_ENC_BITMAP_RESET ioctl, which is now performed using
  KVM_SET_PAGE_ENC_BITMAP.
- Extended guest support for enabling Live Migration feature by adding a
  check for UEFI environment variable indicating OVMF support for Live
  Migration feature and additionally checking for KVM capability for the
  same feature. If not booted under EFI, then we simply check for KVM
  capability.
- Add hypervisor specific hypercall for SEV live migration by adding
  a new paravirt callback as part of x86_hyper_runtime.
  (x86 hypervisor specific runtime callbacks)
- Moving MSR handling for MSR_KVM_SEV_LIVE_MIG_EN into svm/sev code 
  and adding check for SEV live migration enabled by guest in the 
  KVM_GET_PAGE_ENC_BITMAP ioctl.
- Instead of the complete __bss_decrypted section, only specific variables
  such as hv_clock_boot and wall_clock are marked as decrypted in the
  page encryption bitmap.

Changes since v5:
- Fixed build errors reported by the kbuild test robot <lkp@intel.com>.

Changes since v4:
- Host support has been added to extend KVM capabilities/feature bits to
  include a new KVM_FEATURE_SEV_LIVE_MIGRATION, which the guest can query
  for host-side support for SEV live migration, and a new custom MSR
  MSR_KVM_SEV_LIVE_MIG_EN is added for the guest to enable the SEV live
  migration feature.
- Ensure that the __bss_decrypted section is marked as decrypted in the
  page encryption bitmap.
- Fixing KVM_GET_PAGE_ENC_BITMAP ioctl to return the correct bitmap
  as per the number of pages being requested by the user. Ensure that
  we only copy bmap->num_pages bytes in the userspace buffer, if
  bmap->num_pages is not byte aligned we read the trailing bits
  from the userspace and copy those bits as is. This fixes guest
  page(s) corruption issues observed after migration completion.
- Add kexec support for SEV Live Migration to reset the host's
  page encryption bitmap related to kernel specific page encryption
  status settings before we load a new kernel by kexec. We cannot
  reset the complete page encryption bitmap here as we need to
  retain the UEFI/OVMF firmware specific settings.

Changes since v3:
- Rebasing to mainline and testing.
- Adding a new KVM_PAGE_ENC_BITMAP_RESET ioctl, which resets the 
  page encryption bitmap on a guest reboot event.
- Adding a more reliable sanity check for GPA range being passed to
  the hypercall to ensure that guest MMIO ranges are also marked
  in the page encryption bitmap.

Changes since v2:
 - reset the page encryption bitmap on vcpu reboot

Changes since v1:
 - Add support to share the page encryption between the source and target
   machine.
 - Fix review feedbacks from Tom Lendacky.
 - Add check to limit the session blob length.
 - Update KVM_GET_PAGE_ENC_BITMAP ioctl to use the base_gfn instead of
   the memory slot when querying the bitmap.

Ashish Kalra (4):
  KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature &
    Custom MSR.
  EFI: Introduce the new AMD Memory Encryption GUID.
  x86/kvm: Add guest support for detecting and enabling SEV Live
    Migration feature.

Brijesh Singh (8):
  KVM: SVM: Add KVM_SEV SEND_START command
  KVM: SVM: Add KVM_SEND_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_SEND_FINISH command
  KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
  KVM: x86: Add AMD SEV specific Hypercall3
  mm: x86: Invoke hypercall when page encryption status is changed

 .../virt/kvm/amd-memory-encryption.rst        | 120 +++++
 Documentation/virt/kvm/cpuid.rst              |   5 +
 Documentation/virt/kvm/hypercalls.rst         |  15 +
 Documentation/virt/kvm/msr.rst                |  12 +
 arch/x86/include/asm/kvm_host.h               |   2 +
 arch/x86/include/asm/kvm_para.h               |  12 +
 arch/x86/include/asm/mem_encrypt.h            |   8 +
 arch/x86/include/asm/paravirt.h               |  10 +
 arch/x86/include/asm/paravirt_types.h         |   2 +
 arch/x86/include/uapi/asm/kvm_para.h          |   4 +
 arch/x86/kernel/kvm.c                         |  55 +++
 arch/x86/kernel/paravirt.c                    |   1 +
 arch/x86/kvm/cpuid.c                          |   3 +-
 arch/x86/kvm/svm/sev.c                        | 454 ++++++++++++++++++
 arch/x86/kvm/x86.c                            |  29 ++
 arch/x86/mm/mem_encrypt.c                     | 121 ++++-
 arch/x86/mm/pat/set_memory.c                  |   7 +
 include/linux/efi.h                           |   1 +
 include/linux/psp-sev.h                       |   8 +-
 include/uapi/linux/kvm.h                      |  39 ++
 include/uapi/linux/kvm_para.h                 |   1 +
 21 files changed, 903 insertions(+), 6 deletions(-)

-- 
2.17.1



* [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
@ 2021-04-15 15:53 ` Ashish Kalra
  2021-04-20  8:50   ` Paolo Bonzini
  2021-04-15 15:53 ` [PATCH v13 02/12] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:53 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command is used to create an outgoing SEV guest encryption context.
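
For context, userspace reaches this command (like the other SEV commands)
through the KVM_MEMORY_ENCRYPT_OP ioctl on the VM fd. A hedged sketch of
the intended two-step usage, where a zero session_len first queries the
required session blob length (error handling omitted; sev_fd is an open
/dev/sev fd):

	struct kvm_sev_send_start start = {};  /* session_len == 0 */
	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_SEND_START,
		.data   = (__u64)(uintptr_t)&start,
		.sev_fd = sev_fd,
	};

	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);      /* query length */
	start.session_uaddr = (__u64)(uintptr_t)malloc(start.session_len);
	/* fill in policy and the PDH/platform/AMD certificate blobs */
	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);      /* real SEND_START */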

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  27 ++++
 arch/x86/kvm/svm/sev.c                        | 125 ++++++++++++++++++
 include/linux/psp-sev.h                       |   8 +-
 include/uapi/linux/kvm.h                      |  12 ++
 4 files changed, 168 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 469a6308765b..ac799dd7a618 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -284,6 +284,33 @@ Returns: 0 on success, -negative on error
                 __u32 len;
         };
 
+10. KVM_SEV_SEND_START
+----------------------
+
+The KVM_SEV_SEND_START command can be used by the hypervisor to create an
+outgoing guest encryption context.
+
+Parameters (in): struct kvm_sev_send_start
+
+Returns: 0 on success, -negative on error
+
+::
+        struct kvm_sev_send_start {
+                __u32 policy;                 /* guest policy */
+
+                __u64 pdh_cert_uaddr;         /* platform Diffie-Hellman certificate */
+                __u32 pdh_cert_len;
+
+                __u64 plat_certs_uaddr;        /* platform certificate chain */
+                __u32 plat_certs_len;
+
+                __u64 amd_certs_uaddr;        /* AMD certificate */
+                __u32 amd_certs_len;
+
+                __u64 session_uaddr;          /* Guest session information */
+                __u32 session_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 874ea309279f..2b65900c05d6 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1110,6 +1110,128 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+/* Userspace wants to query session length. */
+static int
+__sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
+				      struct kvm_sev_send_start *params)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_start *data;
+	int ret;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (data == NULL)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);
+
+	params->session_len = data->session_len;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
+				sizeof(struct kvm_sev_send_start)))
+		ret = -EFAULT;
+
+	kfree(data);
+	return ret;
+}
+
+static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_start *data;
+	struct kvm_sev_send_start params;
+	void *amd_certs, *session_data;
+	void *pdh_cert, *plat_certs;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+				sizeof(struct kvm_sev_send_start)))
+		return -EFAULT;
+
+	/* if session_len is zero, userspace wants to query the session length */
+	if (!params.session_len)
+		return __sev_send_start_query_session_length(kvm, argp,
+				&params);
+
+	/* some sanity checks */
+	if (!params.pdh_cert_uaddr || !params.pdh_cert_len ||
+	    !params.session_uaddr || params.session_len > SEV_FW_BLOB_MAX_SIZE)
+		return -EINVAL;
+
+	/* allocate the memory to hold the session data blob */
+	session_data = kmalloc(params.session_len, GFP_KERNEL_ACCOUNT);
+	if (!session_data)
+		return -ENOMEM;
+
+	/* copy the certificate blobs from userspace */
+	pdh_cert = psp_copy_user_blob(params.pdh_cert_uaddr,
+				params.pdh_cert_len);
+	if (IS_ERR(pdh_cert)) {
+		ret = PTR_ERR(pdh_cert);
+		goto e_free_session;
+	}
+
+	plat_certs = psp_copy_user_blob(params.plat_certs_uaddr,
+				params.plat_certs_len);
+	if (IS_ERR(plat_certs)) {
+		ret = PTR_ERR(plat_certs);
+		goto e_free_pdh;
+	}
+
+	amd_certs = psp_copy_user_blob(params.amd_certs_uaddr,
+				params.amd_certs_len);
+	if (IS_ERR(amd_certs)) {
+		ret = PTR_ERR(amd_certs);
+		goto e_free_plat_cert;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (data == NULL) {
+		ret = -ENOMEM;
+		goto e_free_amd_cert;
+	}
+
+	/* populate the FW SEND_START field with system physical address */
+	data->pdh_cert_address = __psp_pa(pdh_cert);
+	data->pdh_cert_len = params.pdh_cert_len;
+	data->plat_certs_address = __psp_pa(plat_certs);
+	data->plat_certs_len = params.plat_certs_len;
+	data->amd_certs_address = __psp_pa(amd_certs);
+	data->amd_certs_len = params.amd_certs_len;
+	data->session_address = __psp_pa(session_data);
+	data->session_len = params.session_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);
+
+	if (!ret && copy_to_user((void __user *)(uintptr_t)params.session_uaddr,
+			session_data, params.session_len)) {
+		ret = -EFAULT;
+		goto e_free;
+	}
+
+	params.policy = data->policy;
+	params.session_len = data->session_len;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
+				sizeof(struct kvm_sev_send_start)))
+		ret = -EFAULT;
+
+e_free:
+	kfree(data);
+e_free_amd_cert:
+	kfree(amd_certs);
+e_free_plat_cert:
+	kfree(plat_certs);
+e_free_pdh:
+	kfree(pdh_cert);
+e_free_session:
+	kfree(session_data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1163,6 +1285,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_GET_ATTESTATION_REPORT:
 		r = sev_get_attestation_report(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_START:
+		r = sev_send_start(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/linux/psp-sev.h b/include/linux/psp-sev.h
index b801ead1e2bb..73da511b9423 100644
--- a/include/linux/psp-sev.h
+++ b/include/linux/psp-sev.h
@@ -326,11 +326,11 @@ struct sev_data_send_start {
 	u64 pdh_cert_address;			/* In */
 	u32 pdh_cert_len;			/* In */
 	u32 reserved1;
-	u64 plat_cert_address;			/* In */
-	u32 plat_cert_len;			/* In */
+	u64 plat_certs_address;			/* In */
+	u32 plat_certs_len;			/* In */
 	u32 reserved2;
-	u64 amd_cert_address;			/* In */
-	u32 amd_cert_len;			/* In */
+	u64 amd_certs_address;			/* In */
+	u32 amd_certs_len;			/* In */
 	u32 reserved3;
 	u64 session_address;			/* In */
 	u32 session_len;			/* In/Out */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f6afee209620..ac53ad2e7271 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1729,6 +1729,18 @@ struct kvm_sev_attestation_report {
 	__u32 len;
 };
 
+struct kvm_sev_send_start {
+	__u32 policy;
+	__u64 pdh_cert_uaddr;
+	__u32 pdh_cert_len;
+	__u64 plat_certs_uaddr;
+	__u32 plat_certs_len;
+	__u64 amd_certs_uaddr;
+	__u32 amd_certs_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v13 02/12] KVM: SVM: Add KVM_SEND_UPDATE_DATA command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
  2021-04-15 15:53 ` [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
@ 2021-04-15 15:53 ` Ashish Kalra
  2021-04-15 15:54 ` [PATCH v13 03/12] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:53 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command is used for encrypting the guest memory region using the encryption
context created with KVM_SEV_SEND_START.
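
A hedged sketch of the expected per-page flow from userspace: the first
call, with hdr_len and trans_len left at zero, queries the required buffer
sizes, and gpa_to_hva() is a hypothetical guest-address translation helper:

	struct kvm_sev_send_update_data upd = {};       /* lengths == 0 */
	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_SEND_UPDATE_DATA,
		.data   = (__u64)(uintptr_t)&upd,
		.sev_fd = sev_fd,
	};

	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);      /* query lengths */
	upd.hdr_uaddr   = (__u64)(uintptr_t)malloc(upd.hdr_len);
	upd.trans_uaddr = (__u64)(uintptr_t)malloc(upd.trans_len);
	upd.guest_uaddr = gpa_to_hva(gpa);              /* hypothetical */
	upd.guest_len   = 4096;       /* must not cross a page boundary */
	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	/* hdr and trans now hold the encrypted packet for the wire */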

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  24 ++++
 arch/x86/kvm/svm/sev.c                        | 122 ++++++++++++++++++
 include/uapi/linux/kvm.h                      |   9 ++
 3 files changed, 155 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index ac799dd7a618..3c5456e0268a 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -311,6 +311,30 @@ Returns: 0 on success, -negative on error
                 __u32 session_len;
         };
 
+11. KVM_SEV_SEND_UPDATE_DATA
+----------------------------
+
+The KVM_SEV_SEND_UPDATE_DATA command can be used by the hypervisor to encrypt the
+outgoing guest memory region with the encryption context created using
+KVM_SEV_SEND_START.
+
+Parameters (in): struct kvm_sev_send_update_data
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_send_update_data {
+                __u64 hdr_uaddr;        /* userspace address containing the packet header */
+                __u32 hdr_len;
+
+                __u64 guest_uaddr;      /* the source memory region to be encrypted */
+                __u32 guest_len;
+
+                __u64 trans_uaddr;      /* the destination memory region */
+                __u32 trans_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2b65900c05d6..30527285a39a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -34,6 +34,7 @@ static DECLARE_RWSEM(sev_deactivate_lock);
 static DEFINE_MUTEX(sev_bitmap_lock);
 unsigned int max_sev_asid;
 static unsigned int min_sev_asid;
+static unsigned long sev_me_mask;
 static unsigned long *sev_asid_bitmap;
 static unsigned long *sev_reclaim_asid_bitmap;
 
@@ -1232,6 +1233,123 @@ static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+/* Userspace wants to query either header or trans length. */
+static int
+__sev_send_update_data_query_lengths(struct kvm *kvm, struct kvm_sev_cmd *argp,
+				     struct kvm_sev_send_update_data *params)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_update_data *data;
+	int ret;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
+
+	params->hdr_len = data->hdr_len;
+	params->trans_len = data->trans_len;
+
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
+			 sizeof(struct kvm_sev_send_update_data)))
+		ret = -EFAULT;
+
+	kfree(data);
+	return ret;
+}
+
+static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_update_data *data;
+	struct kvm_sev_send_update_data params;
+	void *hdr, *trans_data;
+	struct page **guest_page;
+	unsigned long n;
+	int ret, offset;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_send_update_data)))
+		return -EFAULT;
+
+	/* userspace wants to query either header or trans length */
+	if (!params.trans_len || !params.hdr_len)
+		return __sev_send_update_data_query_lengths(kvm, argp, &params);
+
+	if (!params.trans_uaddr || !params.guest_uaddr ||
+	    !params.guest_len || !params.hdr_uaddr)
+		return -EINVAL;
+
+	/* Check if we are crossing the page boundary */
+	offset = params.guest_uaddr & (PAGE_SIZE - 1);
+	if ((params.guest_len + offset > PAGE_SIZE))
+		return -EINVAL;
+
+	/* Pin guest memory */
+	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+				    PAGE_SIZE, &n, 0);
+	if (!guest_page)
+		return -EFAULT;
+
+	/* allocate memory for header and transport buffer */
+	ret = -ENOMEM;
+	hdr = kmalloc(params.hdr_len, GFP_KERNEL_ACCOUNT);
+	if (!hdr)
+		goto e_unpin;
+
+	trans_data = kmalloc(params.trans_len, GFP_KERNEL_ACCOUNT);
+	if (!trans_data)
+		goto e_free_hdr;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto e_free_trans_data;
+
+	data->hdr_address = __psp_pa(hdr);
+	data->hdr_len = params.hdr_len;
+	data->trans_address = __psp_pa(trans_data);
+	data->trans_len = params.trans_len;
+
+	/* The SEND_UPDATE_DATA command requires C-bit to be always set. */
+	data->guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) +
+				offset;
+	data->guest_address |= sev_me_mask;
+	data->guest_len = params.guest_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, data, &argp->error);
+
+	if (ret)
+		goto e_free;
+
+	/* copy transport buffer to user space */
+	if (copy_to_user((void __user *)(uintptr_t)params.trans_uaddr,
+			 trans_data, params.trans_len)) {
+		ret = -EFAULT;
+		goto e_free;
+	}
+
+	/* Copy packet header to userspace. */
+	ret = copy_to_user((void __user *)(uintptr_t)params.hdr_uaddr, hdr,
+				params.hdr_len);
+
+e_free:
+	kfree(data);
+e_free_trans_data:
+	kfree(trans_data);
+e_free_hdr:
+	kfree(hdr);
+e_unpin:
+	sev_unpin_memory(kvm, guest_page, n);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1288,6 +1406,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_START:
 		r = sev_send_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_UPDATE_DATA:
+		r = sev_send_update_data(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
@@ -1467,6 +1588,7 @@ void __init sev_hardware_setup(void)
 
 	/* Minimum ASID value that should be used for SEV guest */
 	min_sev_asid = edx;
+	sev_me_mask = 1UL << (ebx & 0x3f);
 
 	/* Initialize SEV ASID bitmaps */
 	sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index ac53ad2e7271..d45af34c31be 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1741,6 +1741,15 @@ struct kvm_sev_send_start {
 	__u32 session_len;
 };
 
+struct kvm_sev_send_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v13 03/12] KVM: SVM: Add KVM_SEV_SEND_FINISH command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
  2021-04-15 15:53 ` [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
  2021-04-15 15:53 ` [PATCH v13 02/12] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
@ 2021-04-15 15:54 ` Ashish Kalra
  2021-04-15 15:54 ` [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:54 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command is used to finalize the encryption context created with the
KVM_SEV_SEND_START command.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  8 +++++++
 arch/x86/kvm/svm/sev.c                        | 23 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 3c5456e0268a..26c4e6c83f62 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -335,6 +335,14 @@ Returns: 0 on success, -negative on error
                 __u32 trans_len;
         };
 
+12. KVM_SEV_SEND_FINISH
+------------------------
+
+After completion of the migration flow, the KVM_SEV_SEND_FINISH command can be
+issued by the hypervisor to delete the encryption context.
+
+Returns: 0 on success, -negative on error
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 30527285a39a..92325d9527ce 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1350,6 +1350,26 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_send_finish *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_FINISH, data, &argp->error);
+
+	kfree(data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1409,6 +1429,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_UPDATE_DATA:
 		r = sev_send_update_data(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SEND_FINISH:
+		r = sev_send_finish(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.17.1



* [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (2 preceding siblings ...)
  2021-04-15 15:54 ` [PATCH v13 03/12] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
@ 2021-04-15 15:54 ` Ashish Kalra
  2021-04-20  8:38   ` Paolo Bonzini
  2021-04-15 15:55 ` [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:54 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command is used to create the encryption context for an incoming
SEV guest. The encryption context can be later used by the hypervisor
to import the incoming data into the SEV guest memory space.
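
A hedged sketch of the destination-side usage; the policy, PDH certificate
and session blob are assumed to have arrived over the migration stream from
the source's KVM_SEV_SEND_START:

	struct kvm_sev_receive_start rs = {
		.policy        = src_policy,
		.pdh_uaddr     = (__u64)(uintptr_t)pdh_blob,
		.pdh_len       = pdh_blob_len,
		.session_uaddr = (__u64)(uintptr_t)session_blob,
		.session_len   = session_blob_len,
	};
	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_RECEIVE_START,
		.data   = (__u64)(uintptr_t)&rs,
		.sev_fd = sev_fd,
	};

	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	/* on success, rs.handle holds the handle bound to this guest */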

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        | 29 +++++++
 arch/x86/kvm/svm/sev.c                        | 81 +++++++++++++++++++
 include/uapi/linux/kvm.h                      |  9 +++
 3 files changed, 119 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index 26c4e6c83f62..c86c1ded8dd8 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -343,6 +343,35 @@ issued by the hypervisor to delete the encryption context.
 
 Returns: 0 on success, -negative on error
 
+13. KVM_SEV_RECEIVE_START
+-------------------------
+
+The KVM_SEV_RECEIVE_START command is used for creating the memory encryption
+context for an incoming SEV guest. To create the encryption context, the user must
+provide a guest policy, the platform public Diffie-Hellman (PDH) key and session
+information.
+
+Parameters: struct kvm_sev_receive_start (in/out)
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_receive_start {
+                __u32 handle;           /* if zero then firmware creates a new handle */
+                __u32 policy;           /* guest's policy */
+
+                __u64 pdh_uaddr;        /* userspace address pointing to the PDH key */
+                __u32 pdh_len;
+
+                __u64 session_uaddr;    /* userspace address which points to the guest session information */
+                __u32 session_len;
+        };
+
+On success, the 'handle' field contains a new handle and on error, a negative value.
+
+For more details, see SEV spec Section 6.12.
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 92325d9527ce..e530c2b34b5e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1370,6 +1370,84 @@ static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_receive_start *start;
+	struct kvm_sev_receive_start params;
+	int *error = &argp->error;
+	void *session_data;
+	void *pdh_data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	/* Get parameter from the userspace */
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_receive_start)))
+		return -EFAULT;
+
+	/* some sanity checks */
+	if (!params.pdh_uaddr || !params.pdh_len ||
+	    !params.session_uaddr || !params.session_len)
+		return -EINVAL;
+
+	pdh_data = psp_copy_user_blob(params.pdh_uaddr, params.pdh_len);
+	if (IS_ERR(pdh_data))
+		return PTR_ERR(pdh_data);
+
+	session_data = psp_copy_user_blob(params.session_uaddr,
+			params.session_len);
+	if (IS_ERR(session_data)) {
+		ret = PTR_ERR(session_data);
+		goto e_free_pdh;
+	}
+
+	ret = -ENOMEM;
+	start = kzalloc(sizeof(*start), GFP_KERNEL);
+	if (!start)
+		goto e_free_session;
+
+	start->handle = params.handle;
+	start->policy = params.policy;
+	start->pdh_cert_address = __psp_pa(pdh_data);
+	start->pdh_cert_len = params.pdh_len;
+	start->session_address = __psp_pa(session_data);
+	start->session_len = params.session_len;
+
+	/* create memory encryption context */
+	ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_RECEIVE_START, start,
+				error);
+	if (ret)
+		goto e_free;
+
+	/* Bind ASID to this guest */
+	ret = sev_bind_asid(kvm, start->handle, error);
+	if (ret)
+		goto e_free;
+
+	params.handle = start->handle;
+	if (copy_to_user((void __user *)(uintptr_t)argp->data,
+			 &params, sizeof(struct kvm_sev_receive_start))) {
+		ret = -EFAULT;
+		sev_unbind_asid(kvm, start->handle);
+		goto e_free;
+	}
+
+	sev->handle = start->handle;
+	sev->fd = argp->sev_fd;
+
+e_free:
+	kfree(start);
+e_free_session:
+	kfree(session_data);
+e_free_pdh:
+	kfree(pdh_data);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1432,6 +1510,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SEND_FINISH:
 		r = sev_send_finish(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_START:
+		r = sev_receive_start(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d45af34c31be..29c25e641a0c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1750,6 +1750,15 @@ struct kvm_sev_send_update_data {
 	__u32 trans_len;
 };
 
+struct kvm_sev_receive_start {
+	__u32 handle;
+	__u32 policy;
+	__u64 pdh_uaddr;
+	__u32 pdh_len;
+	__u64 session_uaddr;
+	__u32 session_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (3 preceding siblings ...)
  2021-04-15 15:54 ` [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
@ 2021-04-15 15:55 ` Ashish Kalra
  2021-04-20  8:40   ` Paolo Bonzini
  2021-04-15 15:55 ` [PATCH v13 06/12] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:55 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command is used for copying the incoming buffer into the
SEV guest memory space.
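
A hedged sketch of the destination-side counterpart of
KVM_SEV_SEND_UPDATE_DATA; hdr_buf/trans_buf carry the packet received from
the source, and gpa_to_hva() is again a hypothetical translation helper:

	struct kvm_sev_receive_update_data upd = {
		.hdr_uaddr   = (__u64)(uintptr_t)hdr_buf,
		.hdr_len     = hdr_len,
		.trans_uaddr = (__u64)(uintptr_t)trans_buf,
		.trans_len   = trans_len,
		.guest_uaddr = gpa_to_hva(gpa),         /* hypothetical */
		.guest_len   = 4096,
	};
	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_RECEIVE_UPDATE_DATA,
		.data   = (__u64)(uintptr_t)&upd,
		.sev_fd = sev_fd,
	};

	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);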

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        | 24 ++++++
 arch/x86/kvm/svm/sev.c                        | 79 +++++++++++++++++++
 include/uapi/linux/kvm.h                      |  9 +++
 3 files changed, 112 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index c86c1ded8dd8..c6ed5b26d841 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -372,6 +372,30 @@ On success, the 'handle' field contains a new handle and on error, a negative va
 
 For more details, see SEV spec Section 6.12.
 
+14. KVM_SEV_RECEIVE_UPDATE_DATA
+-------------------------------
+
+The KVM_SEV_RECEIVE_UPDATE_DATA command can be used by the hypervisor to copy
+the incoming buffers into the guest memory region with the encryption context
+created during KVM_SEV_RECEIVE_START.
+
+Parameters (in): struct kvm_sev_receive_update_data
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_receive_update_data {
+                __u64 hdr_uaddr;        /* userspace address containing the packet header */
+                __u32 hdr_len;
+
+                __u64 guest_uaddr;      /* the destination guest memory region */
+                __u32 guest_len;
+
+                __u64 trans_uaddr;      /* the incoming buffer memory region  */
+                __u32 trans_len;
+        };
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e530c2b34b5e..2c95657cc9bf 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1448,6 +1448,82 @@ static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_receive_update_data params;
+	struct sev_data_receive_update_data *data;
+	void *hdr = NULL, *trans = NULL;
+	struct page **guest_page;
+	unsigned long n;
+	int ret, offset;
+
+	if (!sev_guest(kvm))
+		return -EINVAL;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			sizeof(struct kvm_sev_receive_update_data)))
+		return -EFAULT;
+
+	if (!params.hdr_uaddr || !params.hdr_len ||
+	    !params.guest_uaddr || !params.guest_len ||
+	    !params.trans_uaddr || !params.trans_len)
+		return -EINVAL;
+
+	/* Check if we are crossing the page boundary */
+	offset = params.guest_uaddr & (PAGE_SIZE - 1);
+	if ((params.guest_len + offset > PAGE_SIZE))
+		return -EINVAL;
+
+	hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
+	if (IS_ERR(hdr))
+		return PTR_ERR(hdr);
+
+	trans = psp_copy_user_blob(params.trans_uaddr, params.trans_len);
+	if (IS_ERR(trans)) {
+		ret = PTR_ERR(trans);
+		goto e_free_hdr;
+	}
+
+	ret = -ENOMEM;
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto e_free_trans;
+
+	data->hdr_address = __psp_pa(hdr);
+	data->hdr_len = params.hdr_len;
+	data->trans_address = __psp_pa(trans);
+	data->trans_len = params.trans_len;
+
+	/* Pin guest memory */
+	ret = -EFAULT;
+	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+				    PAGE_SIZE, &n, 0);
+	if (!guest_page)
+		goto e_free;
+
+	/* The RECEIVE_UPDATE_DATA command requires C-bit to be always set. */
+	data->guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) +
+				offset;
+	data->guest_address |= sev_me_mask;
+	data->guest_len = params.guest_len;
+	data->handle = sev->handle;
+
+	ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_UPDATE_DATA, data,
+				&argp->error);
+
+	sev_unpin_memory(kvm, guest_page, n);
+
+e_free:
+	kfree(data);
+e_free_trans:
+	kfree(trans);
+e_free_hdr:
+	kfree(hdr);
+
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1513,6 +1589,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_RECEIVE_START:
 		r = sev_receive_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_UPDATE_DATA:
+		r = sev_receive_update_data(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 29c25e641a0c..3a656d43fc6c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1759,6 +1759,15 @@ struct kvm_sev_receive_start {
 	__u32 session_len;
 };
 
+struct kvm_sev_receive_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+};
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-- 
2.17.1



* [PATCH v13 06/12] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (4 preceding siblings ...)
  2021-04-15 15:55 ` [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
@ 2021-04-15 15:55 ` Ashish Kalra
  2021-04-15 15:56 ` [PATCH v13 07/12] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:55 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The command finalizes the guest receiving process and makes the SEV guest
ready for execution.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/amd-memory-encryption.rst        |  8 +++++++
 arch/x86/kvm/svm/sev.c                        | 23 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
index c6ed5b26d841..0466c0febff9 100644
--- a/Documentation/virt/kvm/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/amd-memory-encryption.rst
@@ -396,6 +396,14 @@ Returns: 0 on success, -negative on error
                 __u32 trans_len;
         };
 
+15. KVM_SEV_RECEIVE_FINISH
+--------------------------
+
+After completion of the migration flow, the KVM_SEV_RECEIVE_FINISH command can be
+issued by the hypervisor to make the guest ready for execution.
+
+Returns: 0 on success, -negative on error
+
 References
 ==========
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2c95657cc9bf..c9795a22e502 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1524,6 +1524,26 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_receive_finish *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, data, &argp->error);
+
+	kfree(data);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1592,6 +1612,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_RECEIVE_UPDATE_DATA:
 		r = sev_receive_update_data(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_RECEIVE_FINISH:
+		r = sev_receive_finish(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.17.1



* [PATCH v13 07/12] KVM: x86: Add AMD SEV specific Hypercall3
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (5 preceding siblings ...)
  2021-04-15 15:55 ` [PATCH v13 06/12] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
@ 2021-04-15 15:56 ` Ashish Kalra
  2021-04-15 15:57 ` [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:56 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

The KVM hypercall framework relies on the alternatives framework to patch
VMCALL -> VMMCALL on AMD platforms. If a hypercall is made before
apply_alternatives() is called, it defaults to VMCALL. This approach works
fine on a non-SEV guest: a VMCALL causes a #UD, and the hypervisor is able
to decode the instruction and do the right thing. But when SEV is active,
guest memory is encrypted with the guest key, and the hypervisor is not
able to decode the instruction bytes.

Add an SEV-specific hypercall3 that unconditionally uses VMMCALL. The
hypercall will be used by the SEV guest to notify the hypervisor about
page encryption status changes.
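
For illustration, the way a later patch in this series invokes the helper
to report one page as shared looks essentially like this (simplified
excerpt; 0 means the encryption attribute is cleared):

	kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
			   pfn << PAGE_SHIFT,   /* guest physical address */
			   1,                   /* number of pages */
			   0);                  /* 0 == decrypted/shared */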

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/kvm_para.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 338119852512..bc1b11d057fc 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -85,6 +85,18 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
 	return ret;
 }
 
+static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
+				      unsigned long p2, unsigned long p3)
+{
+	long ret;
+
+	asm volatile("vmmcall"
+		     : "=a"(ret)
+		     : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
+		     : "memory");
+	return ret;
+}
+
 #ifdef CONFIG_KVM_GUEST
 bool kvm_para_available(void);
 unsigned int kvm_arch_para_features(void);
-- 
2.17.1



* [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (6 preceding siblings ...)
  2021-04-15 15:56 ` [PATCH v13 07/12] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
@ 2021-04-15 15:57 ` Ashish Kalra
  2021-04-20 11:10   ` Paolo Bonzini
  2021-04-15 15:57 ` [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:57 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

This hypercall is used by the SEV guest to notify the hypervisor of a
change in the page encryption status. The hypercall should be invoked only
when the encryption attribute is changed from encrypted -> decrypted and
vice versa. By default all guest pages are considered encrypted.

The hypercall exits to userspace to manage the guest shared regions and
integrate with the userspace VMM's migration code.
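
On the userspace side, handling the resulting exit could look roughly like
the sketch below; update_shared_regions() is a hypothetical VMM bookkeeping
helper, and run points at the vCPU's mmap'ed struct kvm_run:

	if (run->exit_reason == KVM_EXIT_HYPERCALL &&
	    run->hypercall.nr == KVM_HC_PAGE_ENC_STATUS) {
		update_shared_regions(run->hypercall.args[0],  /* gpa    */
				      run->hypercall.args[1],  /* npages */
				      run->hypercall.args[2]); /* enc    */
		run->hypercall.ret = 0;  /* returned to the guest in RAX */
	}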

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 Documentation/virt/kvm/hypercalls.rst | 15 ++++++++++++++
 arch/x86/include/asm/kvm_host.h       |  2 ++
 arch/x86/kvm/svm/sev.c                |  1 +
 arch/x86/kvm/x86.c                    | 29 +++++++++++++++++++++++++++
 include/uapi/linux/kvm_para.h         |  1 +
 5 files changed, 48 insertions(+)

diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
index ed4fddd364ea..7aff0cebab7c 100644
--- a/Documentation/virt/kvm/hypercalls.rst
+++ b/Documentation/virt/kvm/hypercalls.rst
@@ -169,3 +169,18 @@ a0: destination APIC ID
 
 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
 	        any of the IPI target vCPUs was preempted.
+
+
+8. KVM_HC_PAGE_ENC_STATUS
+-------------------------
+:Architecture: x86
+:Status: active
+:Purpose: Notify the encryption status changes in guest page table (SEV guest)
+
+a0: the guest physical address of the start page
+a1: the number of pages
+a2: encryption attribute
+
+   Where:
+	* 1: Encryption attribute is set
+	* 0: Encryption attribute is cleared
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3768819693e5..42eb0fe3df5d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1050,6 +1050,8 @@ struct kvm_arch {
 
 	bool bus_lock_detection_enabled;
 
+	bool page_enc_hc_enable;
+
 	/* Deflect RDMSR and WRMSR to user space when they trigger a #GP */
 	u32 user_space_msr_mask;
 	struct kvm_x86_msr_filter __rcu *msr_filter;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c9795a22e502..5184a0c0131a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -197,6 +197,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	sev->active = true;
 	sev->asid = asid;
 	INIT_LIST_HEAD(&sev->regions_list);
+	kvm->arch.page_enc_hc_enable = true;
 
 	return 0;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f7d12fca397b..e8986478b653 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8208,6 +8208,13 @@ static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
 		kvm_vcpu_yield_to(target);
 }
 
+static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
+{
+	kvm_rax_write(vcpu, vcpu->run->hypercall.ret);
+	++vcpu->stat.hypercalls;
+	return kvm_skip_emulated_instruction(vcpu);
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -8273,6 +8280,28 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		kvm_sched_yield(vcpu->kvm, a0);
 		ret = 0;
 		break;
+	case KVM_HC_PAGE_ENC_STATUS: {
+		u64 gpa = a0, npages = a1, enc = a2;
+
+		ret = -KVM_ENOSYS;
+		if (!vcpu->kvm->arch.page_enc_hc_enable)
+			break;
+
+		if (!PAGE_ALIGNED(gpa) || !npages ||
+		    gpa_to_gfn(gpa) + npages <= gpa_to_gfn(gpa)) {
+			ret = -EINVAL;
+			break;
+		}
+
+		vcpu->run->exit_reason        = KVM_EXIT_HYPERCALL;
+		vcpu->run->hypercall.nr       = KVM_HC_PAGE_ENC_STATUS;
+		vcpu->run->hypercall.args[0]  = gpa;
+		vcpu->run->hypercall.args[1]  = npages;
+		vcpu->run->hypercall.args[2]  = enc;
+		vcpu->run->hypercall.longmode = op_64_bit;
+		vcpu->arch.complete_userspace_io = complete_hypercall_exit;
+		return 0;
+	}
 	default:
 		ret = -KVM_ENOSYS;
 		break;
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 8b86609849b9..847b83b75dc8 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -29,6 +29,7 @@
 #define KVM_HC_CLOCK_PAIRING		9
 #define KVM_HC_SEND_IPI		10
 #define KVM_HC_SCHED_YIELD		11
+#define KVM_HC_PAGE_ENC_STATUS		12
 
 /*
  * hypercalls use architecture specific
-- 
2.17.1



* [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (7 preceding siblings ...)
  2021-04-15 15:57 ` [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
@ 2021-04-15 15:57 ` Ashish Kalra
  2021-04-20  9:39   ` Paolo Bonzini
  2021-04-21 10:05   ` Borislav Petkov
  2021-04-15 15:58 ` [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:57 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Brijesh Singh <brijesh.singh@amd.com>

Invoke a hypercall when a memory region is changed from encrypted ->
decrypted and vice versa. The hypervisor needs to know the page encryption
status during guest migration.
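
For reference, after this patch a driver that flips a buffer to shared
notifies the hypervisor implicitly through the existing set_memory API,
e.g. (standard kernel calls, shown only to illustrate the call path):

	/* set_memory_decrypted() clears the C-bit and, with this patch,
	 * also issues KVM_HC_PAGE_ENC_STATUS for the affected pages.
	 */
	unsigned long vaddr = (unsigned long)page_address(page);

	set_memory_decrypted(vaddr, 1);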

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/paravirt.h       | 10 +++++
 arch/x86/include/asm/paravirt_types.h |  2 +
 arch/x86/kernel/paravirt.c            |  1 +
 arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
 arch/x86/mm/pat/set_memory.c          |  7 ++++
 5 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 4abf110e2243..efaa3e628967 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -84,6 +84,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 	PVOP_VCALL1(mmu.exit_mmap, mm);
 }
 
+static inline void page_encryption_changed(unsigned long vaddr, int npages,
+						bool enc)
+{
+	PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
+}
+
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
@@ -799,6 +805,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
 static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 {
 }
+
+static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
+{
+}
 #endif
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_PARAVIRT_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index de87087d3bde..69ef9c207b38 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -195,6 +195,8 @@ struct pv_mmu_ops {
 
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
+	void (*page_encryption_changed)(unsigned long vaddr, int npages,
+					bool enc);
 
 #ifdef CONFIG_PARAVIRT_XXL
 	struct paravirt_callee_save read_cr2;
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c60222ab8ab9..9f206e192f6b 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -335,6 +335,7 @@ struct paravirt_patch_template pv_ops = {
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
 
 	.mmu.exit_mmap		= paravirt_nop,
+	.mmu.page_encryption_changed	= paravirt_nop,
 
 #ifdef CONFIG_PARAVIRT_XXL
 	.mmu.read_cr2		= __PV_IS_CALLEE_SAVE(native_read_cr2),
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ae78cef79980..fae9ccbd0da7 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -19,6 +19,7 @@
 #include <linux/kernel.h>
 #include <linux/bitops.h>
 #include <linux/dma-mapping.h>
+#include <linux/kvm_para.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -29,6 +30,7 @@
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
 #include <asm/cmdline.h>
+#include <asm/kvm_para.h>
 
 #include "mm_internal.h"
 
@@ -229,6 +231,47 @@ void __init sev_setup_arch(void)
 	swiotlb_adjust_size(size);
 }
 
+static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
+					bool enc)
+{
+	unsigned long sz = npages << PAGE_SHIFT;
+	unsigned long vaddr_end, vaddr_next;
+
+	vaddr_end = vaddr + sz;
+
+	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
+		int psize, pmask, level;
+		unsigned long pfn;
+		pte_t *kpte;
+
+		kpte = lookup_address(vaddr, &level);
+		if (!kpte || pte_none(*kpte))
+			return;
+
+		switch (level) {
+		case PG_LEVEL_4K:
+			pfn = pte_pfn(*kpte);
+			break;
+		case PG_LEVEL_2M:
+			pfn = pmd_pfn(*(pmd_t *)kpte);
+			break;
+		case PG_LEVEL_1G:
+			pfn = pud_pfn(*(pud_t *)kpte);
+			break;
+		default:
+			return;
+		}
+
+		psize = page_level_size(level);
+		pmask = page_level_mask(level);
+
+		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
+				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
+
+		vaddr_next = (vaddr & pmask) + psize;
+	}
+}
+
 static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 {
 	pgprot_t old_prot, new_prot;
@@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 static int __init early_set_memory_enc_dec(unsigned long vaddr,
 					   unsigned long size, bool enc)
 {
-	unsigned long vaddr_end, vaddr_next;
+	unsigned long vaddr_end, vaddr_next, start;
 	unsigned long psize, pmask;
 	int split_page_size_mask;
 	int level, ret;
 	pte_t *kpte;
 
+	start = vaddr;
 	vaddr_next = vaddr;
 	vaddr_end = vaddr + size;
 
@@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
 	ret = 0;
 
+	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
+					enc);
 out:
 	__flush_tlb_all();
 	return ret;
@@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
 	if (sev_active() && !sev_es_active())
 		static_branch_enable(&sev_enable_key);
 
+#ifdef CONFIG_PARAVIRT
+	/*
+	 * With SEV, we need to make a hypercall when page encryption state is
+	 * changed.
+	 */
+	if (sev_active())
+		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
+#endif
+
 	print_mem_encrypt_feature_info();
 }
 
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..3576b583ac65 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -27,6 +27,7 @@
 #include <asm/proto.h>
 #include <asm/memtype.h>
 #include <asm/set_memory.h>
+#include <asm/paravirt.h>
 
 #include "../mm_internal.h"
 
@@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, 0);
 
+	/* Notify hypervisor that a given memory range is mapped encrypted
+	 * or decrypted. The hypervisor will use this information during the
+	 * VM migration.
+	 */
+	page_encryption_changed(addr, numpages, enc);
+
 	return ret;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (8 preceding siblings ...)
  2021-04-15 15:57 ` [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
@ 2021-04-15 15:58 ` Ashish Kalra
  2021-04-19 23:06   ` Sean Christopherson
  2021-04-20  9:47   ` Paolo Bonzini
  2021-04-15 15:58 ` [PATCH v13 11/12] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
                   ` (3 subsequent siblings)
  13 siblings, 2 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:58 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Add a new KVM_FEATURE_SEV_LIVE_MIGRATION feature for the guest to check
for host-side support for SEV live migration. Also add a new custom
MSR_KVM_SEV_LIVE_MIGRATION for the guest to enable the SEV live
migration feature.

The MSR is handled by userspace using MSR filters.
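
For reference, a minimal sketch of that userspace side, assuming a
QEMU-like VMM; KVM_X86_SET_MSR_FILTER and KVM_CAP_X86_USER_SPACE_MSR are
existing KVM interfaces, and the error handling here is deliberately thin:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08

/* Deflect guest accesses to the new MSR to userspace. */
static int install_sev_migration_msr_filter(int vm_fd)
{
	static uint8_t bitmap;		/* bit 0 clear = deny, i.e. exit */
	struct kvm_msr_filter filter = {
		.flags = KVM_MSR_FILTER_DEFAULT_ALLOW,
		.ranges[0] = {
			.flags  = KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE,
			.base   = MSR_KVM_SEV_LIVE_MIGRATION,
			.nmsrs  = 1,
			.bitmap = &bitmap,
		},
	};
	struct kvm_enable_cap cap = {
		.cap  = KVM_CAP_X86_USER_SPACE_MSR,
		.args = { KVM_MSR_EXIT_REASON_FILTER },
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
		return -1;

	return ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);
}

With bit 0 of the bitmap clear, guest accesses to the MSR reach userspace
as KVM_EXIT_X86_RDMSR/KVM_EXIT_X86_WRMSR exits instead of being handled in
KVM; see the exit-handling sketch after this patch.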

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Steve Rutherford <srutherford@google.com>
---
 Documentation/virt/kvm/cpuid.rst     |  5 +++++
 Documentation/virt/kvm/msr.rst       | 12 ++++++++++++
 arch/x86/include/uapi/asm/kvm_para.h |  4 ++++
 arch/x86/kvm/cpuid.c                 |  3 ++-
 4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
index cf62162d4be2..0bdb6cdb12d3 100644
--- a/Documentation/virt/kvm/cpuid.rst
+++ b/Documentation/virt/kvm/cpuid.rst
@@ -96,6 +96,11 @@ KVM_FEATURE_MSI_EXT_DEST_ID        15          guest checks this feature bit
                                                before using extended destination
                                                ID bits in MSI address bits 11-5.
 
+KVM_FEATURE_SEV_LIVE_MIGRATION     16          guest checks this feature bit before
+                                               using the page encryption state
+                                               hypercall to notify the page state
+                                               change
+
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24          host will warn if no guest-side
                                                per-cpu warps are expected in
                                                kvmclock
diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
index e37a14c323d2..020245d16087 100644
--- a/Documentation/virt/kvm/msr.rst
+++ b/Documentation/virt/kvm/msr.rst
@@ -376,3 +376,15 @@ data:
 	write '1' to bit 0 of the MSR, this causes the host to re-scan its queue
 	and check if there are more notifications pending. The MSR is available
 	if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
+
+MSR_KVM_SEV_LIVE_MIGRATION:
+        0x4b564d08
+
+	Control SEV Live Migration features.
+
+data:
+        Bit 0 enables (1) or disables (0) the host-side SEV live migration
+        feature. In other words, it is the guest telling the host that it
+        is properly handling the shared pages list.
+
+        All other bits are reserved.
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 950afebfba88..f6bfa138874f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -33,6 +33,7 @@
 #define KVM_FEATURE_PV_SCHED_YIELD	13
 #define KVM_FEATURE_ASYNC_PF_INT	14
 #define KVM_FEATURE_MSI_EXT_DEST_ID	15
+#define KVM_FEATURE_SEV_LIVE_MIGRATION	16
 
 #define KVM_HINTS_REALTIME      0
 
@@ -54,6 +55,7 @@
 #define MSR_KVM_POLL_CONTROL	0x4b564d05
 #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
 #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
+#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08
 
 struct kvm_steal_time {
 	__u64 steal;
@@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#define KVM_SEV_LIVE_MIGRATION_ENABLED BIT_ULL(0)
+
 #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6bd2f8b830e4..4e2e69a692aa 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -812,7 +812,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			     (1 << KVM_FEATURE_PV_SEND_IPI) |
 			     (1 << KVM_FEATURE_POLL_CONTROL) |
 			     (1 << KVM_FEATURE_PV_SCHED_YIELD) |
-			     (1 << KVM_FEATURE_ASYNC_PF_INT);
+			     (1 << KVM_FEATURE_ASYNC_PF_INT) |
+			     (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);
 
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread
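
Continuing the filter sketch above, the resulting exit on a guest WRMSR
might be handled roughly as follows; set_migration_ready() is a
hypothetical VMM helper, while the kvm_run msr layout is existing uAPI and
KVM_SEV_LIVE_MIGRATION_ENABLED comes from this patch:

	case KVM_EXIT_X86_WRMSR:
		if (run->msr.index == MSR_KVM_SEV_LIVE_MIGRATION) {
			/* Bit 0: the guest declares its shared pages list valid. */
			set_migration_ready(run->msr.data &
					    KVM_SEV_LIVE_MIGRATION_ENABLED);
			run->msr.error = 0;	/* success, no #GP injected */
		}
		break;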

* [PATCH v13 11/12] EFI: Introduce the new AMD Memory Encryption GUID.
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (9 preceding siblings ...)
  2021-04-15 15:58 ` [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
@ 2021-04-15 15:58 ` Ashish Kalra
  2021-04-15 16:01 ` [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 15:58 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

From: Ashish Kalra <ashish.kalra@amd.com>

Introduce a new AMD Memory Encryption GUID, currently used to define
a new UEFI environment variable that indicates UEFI/OVMF support for
the SEV live migration feature. This variable is set up when UEFI/OVMF
detects host/hypervisor support for SEV live migration; the kernel
later reads it using EFI runtime services to verify that OVMF supports
the live migration feature.
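
For context, a rough sketch of the firmware side that would produce this
variable; the variable name and GUID come from this series, but the
function itself is hypothetical and the real change lives in edk2, not in
this kernel series:

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>

STATIC EFI_GUID mMemEncryptGuid =
  { 0x0cf29b71, 0x9e51, 0x433a,
    { 0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75 } };

STATIC
EFI_STATUS
AnnounceSevLiveMigrationSupport (
  VOID
  )
{
  BOOLEAN  Enabled = TRUE;

  //
  // Read back by the guest kernel via EFI runtime services from a
  // late_initcall() (see patch 12 of this series).
  //
  return gRT->SetVariable (
                L"SevLiveMigrationEnabled",
                &mMemEncryptGuid,
                EFI_VARIABLE_BOOTSERVICE_ACCESS | EFI_VARIABLE_RUNTIME_ACCESS,
                sizeof (Enabled),
                &Enabled
                );
}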

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Steve Rutherford <srutherford@google.com>
---
 include/linux/efi.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/efi.h b/include/linux/efi.h
index 6b5d36babfcc..6f364ace82cb 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -362,6 +362,7 @@ void efi_native_runtime_setup(void);
 
 /* OEM GUIDs */
 #define DELLEMC_EFI_RCI2_TABLE_GUID		EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
+#define MEM_ENCRYPT_GUID			EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
 
 typedef struct {
 	efi_guid_t guid;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (10 preceding siblings ...)
  2021-04-15 15:58 ` [PATCH v13 11/12] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
@ 2021-04-15 16:01 ` Ashish Kalra
  2021-04-20 10:52   ` Paolo Bonzini
  2021-04-21 14:44   ` Borislav Petkov
  2021-04-16 21:43 ` [PATCH v13 00/12] Add AMD SEV guest live migration support Steve Rutherford
  2021-04-20 11:11 ` Paolo Bonzini
  13 siblings, 2 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-15 16:01 UTC (permalink / raw)
  To: pbonzini
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

From: Ashish Kalra <ashish.kalra@amd.com>

The guest support for detecting and enabling the SEV live migration
feature uses the following logic:

 - kvm_init_platform() invokes check_kvm_sev_migration(), which
   checks whether the kernel was booted under EFI

   - If not EFI,

     i) check for the KVM_FEATURE_SEV_LIVE_MIGRATION bit in the KVM
        feature CPUID leaf

     ii) if CPUID reports that migration is supported, issue a wrmsrl()
         to enable the SEV live migration support

   - If EFI,

     i) check for the KVM_FEATURE_SEV_LIVE_MIGRATION bit in the KVM
        feature CPUID leaf

     ii) if CPUID reports that migration is supported, read the UEFI
         variable which indicates OVMF support for live migration

     iii) if the variable indicates live migration is supported, issue
          a wrmsrl() to enable the SEV live migration support

The EFI live migration check is done using a late_initcall() callback.

Also, ensure that the _bss_decrypted section is marked as decrypted in
the shared pages list.

Also add kexec support for SEV live migration.

Reset the host's shared pages list related to kernel-specific
page encryption status settings before we load a new kernel by
kexec. We cannot reset the complete shared pages list here, as
we need to retain the UEFI/OVMF firmware-specific settings.

The host's shared pages list is maintained for the guest to keep
track of all unencrypted guest memory regions; therefore we need
to explicitly mark all shared pages as encrypted again before
rebooting into the new guest kernel.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h |  8 ++++
 arch/x86/kernel/kvm.c              | 55 +++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c          | 64 ++++++++++++++++++++++++++++++
 3 files changed, 127 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 31c4df123aa0..19b77f3a62dc 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,6 +21,7 @@
 extern u64 sme_me_mask;
 extern u64 sev_status;
 extern bool sev_enabled;
+extern bool sev_live_migration_enabled;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -44,8 +45,11 @@ void __init sme_enable(struct boot_params *bp);
 
 int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
 int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+					    bool enc);
 
 void __init mem_encrypt_free_decrypted_mem(void);
+void __init check_kvm_sev_migration(void);
 
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
@@ -60,6 +64,7 @@ bool sev_es_active(void);
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0ULL
+#define sev_live_migration_enabled	false
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
@@ -84,8 +89,11 @@ static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
 static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
+static inline void __init
+early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
 
 static inline void mem_encrypt_free_decrypted_mem(void) { }
+static inline void check_kvm_sev_migration(void) { }
 
 #define __bss_decrypted
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 78bb0fae3982..94ef16d263a7 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -26,6 +26,7 @@
 #include <linux/kprobes.h>
 #include <linux/nmi.h>
 #include <linux/swait.h>
+#include <linux/efi.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
 	early_set_memory_decrypted((unsigned long) ptr, size);
 }
 
+static int __init setup_kvm_sev_migration(void)
+{
+	efi_char16_t efi_sev_live_migration_enabled[] = L"SevLiveMigrationEnabled";
+	efi_guid_t efi_variable_guid = MEM_ENCRYPT_GUID;
+	efi_status_t status;
+	unsigned long size;
+	bool enabled;
+
+	/*
+	 * check_kvm_sev_migration(), invoked via kvm_init_platform() before
+	 * this callback, will already have set up the indicator that the
+	 * live migration feature is supported/enabled.
+	 */
+	if (!sev_live_migration_enabled)
+		return 0;
+
+	if (!efi_enabled(EFI_BOOT))
+		return 0;
+
+	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+		pr_info("%s: EFI runtime services are not enabled\n", __func__);
+		return 0;
+	}
+
+	size = sizeof(enabled);
+
+	/* Get variable contents into buffer */
+	status = efi.get_variable(efi_sev_live_migration_enabled,
+				  &efi_variable_guid, NULL, &size, &enabled);
+
+	if (status == EFI_NOT_FOUND) {
+		pr_info("%s: EFI live migration variable not found\n", __func__);
+		return 0;
+	}
+
+	if (status != EFI_SUCCESS) {
+		pr_info("%s: EFI variable retrieval failed\n", __func__);
+		return 0;
+	}
+
+	if (enabled == 0) {
+		pr_info("%s: live migration disabled in EFI\n", __func__);
+		return 0;
+	}
+
+	pr_info("%s: live migration enabled in EFI\n", __func__);
+	wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION, KVM_SEV_LIVE_MIGRATION_ENABLED);
+
+	return 0;
+}
+
+late_initcall(setup_kvm_sev_migration);
+
 /*
  * Iterate through all possible CPUs and map the memory region pointed
  * by apf_reason, steal_time and kvm_apic_eoi as decrypted at once.
@@ -747,6 +801,7 @@ static bool __init kvm_msi_ext_dest_id(void)
 
 static void __init kvm_init_platform(void)
 {
+	check_kvm_sev_migration();
 	kvmclock_init();
 	x86_platform.apic_post_init = kvm_apic_init;
 }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index fae9ccbd0da7..382d1d4f00f5 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include <linux/bitops.h>
 #include <linux/dma-mapping.h>
 #include <linux/kvm_para.h>
+#include <linux/efi.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -31,6 +32,7 @@
 #include <asm/msr.h>
 #include <asm/cmdline.h>
 #include <asm/kvm_para.h>
+#include <asm/e820/api.h>
 
 #include "mm_internal.h"
 
@@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
 
 bool sev_enabled __section(".data");
 
+bool sev_live_migration_enabled __section(".data");
+
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
@@ -237,6 +241,9 @@ static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
 	unsigned long sz = npages << PAGE_SHIFT;
 	unsigned long vaddr_end, vaddr_next;
 
+	if (!sev_live_migration_enabled)
+		return;
+
 	vaddr_end = vaddr + sz;
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
@@ -407,6 +414,12 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 	return early_set_memory_enc_dec(vaddr, size, true);
 }
 
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+					bool enc)
+{
+	set_memory_enc_dec_hypercall(vaddr, npages, enc);
+}
+
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
@@ -462,6 +475,57 @@ bool force_dma_unencrypted(struct device *dev)
 	return false;
 }
 
+void __init check_kvm_sev_migration(void)
+{
+	if (sev_active() &&
+	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
+		unsigned long nr_pages;
+		int i;
+
+		pr_info("KVM: SEV live migration enabled\n");
+		WRITE_ONCE(sev_live_migration_enabled, true);
+
+		/*
+		 * Reset the host's shared pages list related to kernel
+		 * specific page encryption status settings before we load a
+		 * new kernel by kexec. Reset the page encryption status
+		 * during early boot instead of just before kexec to avoid SMP
+		 * races during kvm_pv_guest_cpu_reboot().
+		 * NOTE: We cannot reset the complete shared pages list
+		 * here as we need to retain the UEFI/OVMF firmware
+		 * specific settings.
+		 */
+
+		for (i = 0; i < e820_table->nr_entries; i++) {
+			struct e820_entry *entry = &e820_table->entries[i];
+
+			if (entry->type != E820_TYPE_RAM)
+				continue;
+
+			nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
+
+			kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS, entry->addr,
+					   nr_pages, 1);
+		}
+
+		/*
+		 * Ensure that _bss_decrypted section is marked as decrypted in the
+		 * shared pages list.
+		 */
+		nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
+					PAGE_SIZE);
+		early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
+						nr_pages, 0);
+
+		/*
+		 * If not booted using EFI, enable live migration support.
+		 */
+		if (!efi_enabled(EFI_BOOT))
+			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
+			       KVM_SEV_LIVE_MIGRATION_ENABLED);
+	}
+}
+
 void __init mem_encrypt_free_decrypted_mem(void)
 {
 	unsigned long vaddr, vaddr_end, npages;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (11 preceding siblings ...)
  2021-04-15 16:01 ` [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
@ 2021-04-16 21:43 ` Steve Rutherford
  2021-04-19 14:40   ` Ashish Kalra
  2021-04-20 11:11 ` Paolo Bonzini
  13 siblings, 1 reply; 43+ messages in thread
From: Steve Rutherford @ 2021-04-16 21:43 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, Sean Christopherson, Venu Busireddy, Brijesh Singh

On Thu, Apr 15, 2021 at 8:52 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> The series add support for AMD SEV guest live migration commands. To protect the
> confidentiality of an SEV protected guest memory while in transit we need to
> use the SEV commands defined in SEV API spec [1].
>
> SEV guest VMs have the concept of private and shared memory. Private memory
> is encrypted with the guest-specific key, while shared memory may be encrypted
> with hypervisor key. The commands provided by the SEV FW are meant to be used
> for the private memory only. The patch series introduces a new hypercall.
> The guest OS can use this hypercall to notify the page encryption status.
> If the page is encrypted with guest specific-key then we use SEV command during
> the migration. If page is not encrypted then fallback to default.
>
> The patch uses the KVM_EXIT_HYPERCALL exitcode and hypercall to
> userspace exit functionality as a common interface from the guest back to the
> VMM and passing on the guest shared/unencrypted page information to the
> userspace VMM/Qemu. Qemu can consult this information during migration to know
> whether the page is encrypted.
>
> This section descibes how the SEV live migration feature is negotiated
> between the host and guest, the host indicates this feature support via
> KVM_FEATURE_CPUID. The guest firmware (OVMF) detects this feature and
> sets a UEFI enviroment variable indicating OVMF support for live
> migration, the guest kernel also detects the host support for this
> feature via cpuid and in case of an EFI boot verifies if OVMF also
> supports this feature by getting the UEFI enviroment variable and if it
> set then enables live migration feature on host by writing to a custom
> MSR, if not booted under EFI, then it simply enables the feature by
> again writing to the custom MSR. The MSR is also handled by the
> userspace VMM/Qemu.
>
> A branch containing these patches is available here:
> https://github.com/AMDESE/linux/tree/sev-migration-v13
>
> [1] https://developer.amd.com/wp-content/resources/55766.PDF
>
> Changes since v12:
> - Reset page encryption status during early boot instead of just
>   before the kexec to avoid SMP races during kvm_pv_guest_cpu_reboot().

Does this series need to disable the MSR during kvm_pv_guest_cpu_reboot()?

I _think_ going into blackout during the window after restart, but
before the MSR is explicitly reenabled, would cause corruption. The
historical shared pages could be re-allocated as non-shared pages
during restart.

Steve

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-16 21:43 ` [PATCH v13 00/12] Add AMD SEV guest live migration support Steve Rutherford
@ 2021-04-19 14:40   ` Ashish Kalra
  0 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-19 14:40 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list,
	LKML, Sean Christopherson, Venu Busireddy, Brijesh Singh

On Fri, Apr 16, 2021 at 02:43:48PM -0700, Steve Rutherford wrote:
> On Thu, Apr 15, 2021 at 8:52 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
> >
> > From: Ashish Kalra <ashish.kalra@amd.com>
> >
> > The series add support for AMD SEV guest live migration commands. To protect the
> > confidentiality of an SEV protected guest memory while in transit we need to
> > use the SEV commands defined in SEV API spec [1].
> >
> > SEV guest VMs have the concept of private and shared memory. Private memory
> > is encrypted with the guest-specific key, while shared memory may be encrypted
> > with hypervisor key. The commands provided by the SEV FW are meant to be used
> > for the private memory only. The patch series introduces a new hypercall.
> > The guest OS can use this hypercall to notify the page encryption status.
> > If the page is encrypted with guest specific-key then we use SEV command during
> > the migration. If page is not encrypted then fallback to default.
> >
> > The patch uses the KVM_EXIT_HYPERCALL exitcode and hypercall to
> > userspace exit functionality as a common interface from the guest back to the
> > VMM and passing on the guest shared/unencrypted page information to the
> > userspace VMM/Qemu. Qemu can consult this information during migration to know
> > whether the page is encrypted.
> >
> > This section descibes how the SEV live migration feature is negotiated
> > between the host and guest, the host indicates this feature support via
> > KVM_FEATURE_CPUID. The guest firmware (OVMF) detects this feature and
> > sets a UEFI enviroment variable indicating OVMF support for live
> > migration, the guest kernel also detects the host support for this
> > feature via cpuid and in case of an EFI boot verifies if OVMF also
> > supports this feature by getting the UEFI enviroment variable and if it
> > set then enables live migration feature on host by writing to a custom
> > MSR, if not booted under EFI, then it simply enables the feature by
> > again writing to the custom MSR. The MSR is also handled by the
> > userspace VMM/Qemu.
> >
> > A branch containing these patches is available here:
> > https://github.com/AMDESE/linux/tree/sev-migration-v13
> >
> > [1] https://developer.amd.com/wp-content/resources/55766.PDF
> >
> > Changes since v12:
> > - Reset page encryption status during early boot instead of just
> >   before the kexec to avoid SMP races during kvm_pv_guest_cpu_reboot().
> 
> Does this series need to disable the MSR during kvm_pv_guest_cpu_reboot()?
> 

Yes, I think that makes sense. It will be similar to the first VM boot,
where the MSR stays disabled until it is enabled during early kernel
boot. I will add this to the current patch series.

Thanks,
Ashish

> I _think_ going into blackout during the window after restart, but
> before the MSR is explicitly reenabled, would cause corruption. The
> historical shared pages could be re-allocated as non-shared pages
> during restart.
> 
> Steve

^ permalink raw reply	[flat|nested] 43+ messages in thread
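
One possible shape of the change Ashish agrees to above, as a hedged
sketch; kvm_pv_guest_cpu_reboot() is the existing reboot callback in
arch/x86/kernel/kvm.c, and the exact placement is a guess at what the
next revision might do:

static void kvm_pv_guest_cpu_reboot(void *unused)
{
	/*
	 * Disable live migration before rebooting into the new kernel, so
	 * the host stops trusting the stale shared pages list until
	 * check_kvm_sev_migration() re-enables it during early boot.
	 */
	if (sev_live_migration_enabled)
		wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION, 0);

	/* ... existing teardown: async PF, PV EOI, steal time ... */
}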

* Re: [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2021-04-15 15:58 ` [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
@ 2021-04-19 23:06   ` Sean Christopherson
  2021-04-20 10:49     ` Paolo Bonzini
  2021-04-20  9:47   ` Paolo Bonzini
  1 sibling, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2021-04-19 23:06 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, venu.busireddy, brijesh.singh

On Thu, Apr 15, 2021, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> for host-side support for SEV live migration. Also add a new custom
> MSR_KVM_SEV_LIVE_MIGRATION for guest to enable the SEV live migration
> feature.
> 
> MSR is handled by userspace using MSR filters.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> ---
>  Documentation/virt/kvm/cpuid.rst     |  5 +++++
>  Documentation/virt/kvm/msr.rst       | 12 ++++++++++++
>  arch/x86/include/uapi/asm/kvm_para.h |  4 ++++
>  arch/x86/kvm/cpuid.c                 |  3 ++-
>  4 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
> index cf62162d4be2..0bdb6cdb12d3 100644
> --- a/Documentation/virt/kvm/cpuid.rst
> +++ b/Documentation/virt/kvm/cpuid.rst
> @@ -96,6 +96,11 @@ KVM_FEATURE_MSI_EXT_DEST_ID        15          guest checks this feature bit
>                                                 before using extended destination
>                                                 ID bits in MSI address bits 11-5.
>  
> +KVM_FEATURE_SEV_LIVE_MIGRATION     16          guest checks this feature bit before
> +                                               using the page encryption state
> +                                               hypercall to notify the page state
> +                                               change

Hrm, I think there are two separate things being intertwined: the hypercall to
communicate private/shared pages, and the MSR to control live migration.  More
thoughts below.

>  KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24          host will warn if no guest-side
>                                                 per-cpu warps are expected in
>                                                 kvmclock
> diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
> index e37a14c323d2..020245d16087 100644
> --- a/Documentation/virt/kvm/msr.rst
> +++ b/Documentation/virt/kvm/msr.rst
> @@ -376,3 +376,15 @@ data:
>  	write '1' to bit 0 of the MSR, this causes the host to re-scan its queue
>  	and check if there are more notifications pending. The MSR is available
>  	if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
> +
> +MSR_KVM_SEV_LIVE_MIGRATION:
> +        0x4b564d08
> +
> +	Control SEV Live Migration features.
> +
> +data:
> +        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature,
> +        in other words, this is guest->host communication that it's properly
> +        handling the shared pages list.
> +
> +        All other bits are reserved.
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 950afebfba88..f6bfa138874f 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -33,6 +33,7 @@
>  #define KVM_FEATURE_PV_SCHED_YIELD	13
>  #define KVM_FEATURE_ASYNC_PF_INT	14
>  #define KVM_FEATURE_MSI_EXT_DEST_ID	15
> +#define KVM_FEATURE_SEV_LIVE_MIGRATION	16
>  
>  #define KVM_HINTS_REALTIME      0
>  
> @@ -54,6 +55,7 @@
>  #define MSR_KVM_POLL_CONTROL	0x4b564d05
>  #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>  #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
> +#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08
>  
>  struct kvm_steal_time {
>  	__u64 steal;
> @@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
>  #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
>  #define KVM_PV_EOI_DISABLED 0x0
>  
> +#define KVM_SEV_LIVE_MIGRATION_ENABLED BIT_ULL(0)

Even though the intent is to "force" userspace to intercept the MSR, I think KVM
should at least emulate the legal bits as a nop.  Deferring completely to
userspace is rather bizarre as there's not really anything to justify KVM
getting involved.  It would also force userspace to filter the MSR just to
support the hypercall.

Somewhat of a nit, but I think we should do something like s/ENABLED/READY,
or maybe s/ENABLED/SAFE, in the bit name so that the semantics are more along
the lines of an announcement from the guest, as opposed to a command.  Treating
the bit as a hint/announcement makes it easier to bundle the hypercall and the
MSR together under a single feature, e.g. it's slightly more obvious that
userspace can ignore the MSR if it knows its use case doesn't need migration or
that it can't migrate its guest at will.

I also think we should drop the "SEV" part, especially since it sounds like the
feature flag also enumerates that the hypercall is available.

E.g. for the WRMSR side

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eca63625ae..10f90f8491 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3229,6 +3229,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)

                vcpu->arch.msr_kvm_poll_control = data;
                break;
+       case MSR_KVM_LIVE_MIGRATION_CONTROL:
+               if (!guest_pv_has(vcpu, KVM_FEATURE_LIVE_MIGRATION_CONTROL))
+                       return 1;
+
+               if (data & ~KVM_LIVE_MIGRATION_READY)
+                       return 1;
+               break;

        case MSR_IA32_MCG_CTL:
        case MSR_IA32_MCG_STATUS:


^ permalink raw reply related	[flat|nested] 43+ messages in thread
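
A matching RDMSR counterpart to the WRMSR sketch above, under the same
renamed identifiers; the vcpu->arch backing field is hypothetical and
would be written by the WRMSR side rather than discarded:

	case MSR_KVM_LIVE_MIGRATION_CONTROL:
		if (!guest_pv_has(vcpu, KVM_FEATURE_LIVE_MIGRATION_CONTROL))
			return 1;

		msr_info->data = vcpu->arch.msr_kvm_migration_control;
		break;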

* Re: [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  2021-04-15 15:54 ` [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
@ 2021-04-20  8:38   ` Paolo Bonzini
  2021-04-20  9:18     ` Paolo Bonzini
  0 siblings, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:38 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:54, Ashish Kalra wrote:
> +	}
> +
> +	sev->handle = start->handle;
> +	sev->fd = argp->sev_fd;

These two lines are spurious, I'll delete them.

Paolo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  2021-04-15 15:55 ` [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
@ 2021-04-20  8:40   ` Paolo Bonzini
  2021-04-20  8:43     ` Paolo Bonzini
  0 siblings, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:40 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:55, Ashish Kalra wrote:
> +	if (!guest_page)
> +		goto e_free;
> +

Missing unpin on error (but it won't be needed with Sean's patches that 
move the data block to the stack, so I can fix this too).

Paolo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  2021-04-20  8:40   ` Paolo Bonzini
@ 2021-04-20  8:43     ` Paolo Bonzini
  0 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:43 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 20/04/21 10:40, Paolo Bonzini wrote:
> On 15/04/21 17:55, Ashish Kalra wrote:
>> +    if (!guest_page)
>> +        goto e_free;
>> +
> 
> Missing unpin on error (but it won't be needed with Sean's patches that 
> move the data block to the stack, so I can fix this too).

No, sorry---the initialization order is different between 
send_update_data and receive_update_data, so it's okay.

Paolo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command
  2021-04-15 15:53 ` [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
@ 2021-04-20  8:50   ` Paolo Bonzini
  0 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  8:50 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:53, Ashish Kalra wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
> 
> The command is used to create an outgoing SEV guest encryption context.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>   .../virt/kvm/amd-memory-encryption.rst        |  27 ++++
>   arch/x86/kvm/svm/sev.c                        | 125 ++++++++++++++++++
>   include/linux/psp-sev.h                       |   8 +-
>   include/uapi/linux/kvm.h                      |  12 ++
>   4 files changed, 168 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index 469a6308765b..ac799dd7a618 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -284,6 +284,33 @@ Returns: 0 on success, -negative on error
>                   __u32 len;
>           };
>   
> +10. KVM_SEV_SEND_START
> +----------------------
> +
> +The KVM_SEV_SEND_START command can be used by the hypervisor to create an
> +outgoing guest encryption context.
> +
> +Parameters (in): struct kvm_sev_send_start
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +        struct kvm_sev_send_start {
> +                __u32 policy;                 /* guest policy */
> +
> +                __u64 pdh_cert_uaddr;         /* platform Diffie-Hellman certificate */
> +                __u32 pdh_cert_len;
> +
> +                __u64 plat_certs_uaddr;        /* platform certificate chain */
> +                __u32 plat_certs_len;
> +
> +                __u64 amd_certs_uaddr;        /* AMD certificate */
> +                __u32 amd_certs_len;
> +
> +                __u64 session_uaddr;          /* Guest session information */
> +                __u32 session_len;
> +        };
> +
>   References
>   ==========
>   
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 874ea309279f..2b65900c05d6 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1110,6 +1110,128 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
>   	return ret;
>   }
>   
> +/* Userspace wants to query session length. */
> +static int
> +__sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
> +				      struct kvm_sev_send_start *params)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_send_start *data;
> +	int ret;
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
> +	if (data == NULL)
> +		return -ENOMEM;
> +
> +	data->handle = sev->handle;
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);

This is missing an "if (ret < 0)" (and this time I'm pretty sure it's 
indeed the case :)), otherwise you miss for example the EBADF return 
code if the SEV file descriptor is closed or reused.  Same for 
KVM_SEND_UPDATE_DATA.  Also, the length==0 case is not documented.

Paolo

> +	params->session_len = data->session_len;
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data, params,
> +				sizeof(struct kvm_sev_send_start)))
> +		ret = -EFAULT;
> +
> +	kfree(data);
> +	return ret;
> +}
> +
> +static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +	struct sev_data_send_start *data;
> +	struct kvm_sev_send_start params;
> +	void *amd_certs, *session_data;
> +	void *pdh_cert, *plat_certs;
> +	int ret;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
> +				sizeof(struct kvm_sev_send_start)))
> +		return -EFAULT;
> +
> +	/* if session_len is zero, userspace wants to query the session length */
> +	if (!params.session_len)
> +		return __sev_send_start_query_session_length(kvm, argp,
> +				&params);
> +
> +	/* some sanity checks */
> +	if (!params.pdh_cert_uaddr || !params.pdh_cert_len ||
> +	    !params.session_uaddr || params.session_len > SEV_FW_BLOB_MAX_SIZE)
> +		return -EINVAL;
> +
> +	/* allocate the memory to hold the session data blob */
> +	session_data = kmalloc(params.session_len, GFP_KERNEL_ACCOUNT);
> +	if (!session_data)
> +		return -ENOMEM;
> +
> +	/* copy the certificate blobs from userspace */
> +	pdh_cert = psp_copy_user_blob(params.pdh_cert_uaddr,
> +				params.pdh_cert_len);
> +	if (IS_ERR(pdh_cert)) {
> +		ret = PTR_ERR(pdh_cert);
> +		goto e_free_session;
> +	}
> +
> +	plat_certs = psp_copy_user_blob(params.plat_certs_uaddr,
> +				params.plat_certs_len);
> +	if (IS_ERR(plat_certs)) {
> +		ret = PTR_ERR(plat_certs);
> +		goto e_free_pdh;
> +	}
> +
> +	amd_certs = psp_copy_user_blob(params.amd_certs_uaddr,
> +				params.amd_certs_len);
> +	if (IS_ERR(amd_certs)) {
> +		ret = PTR_ERR(amd_certs);
> +		goto e_free_plat_cert;
> +	}
> +
> +	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
> +	if (data == NULL) {
> +		ret = -ENOMEM;
> +		goto e_free_amd_cert;
> +	}
> +
> +	/* populate the FW SEND_START field with system physical address */
> +	data->pdh_cert_address = __psp_pa(pdh_cert);
> +	data->pdh_cert_len = params.pdh_cert_len;
> +	data->plat_certs_address = __psp_pa(plat_certs);
> +	data->plat_certs_len = params.plat_certs_len;
> +	data->amd_certs_address = __psp_pa(amd_certs);
> +	data->amd_certs_len = params.amd_certs_len;
> +	data->session_address = __psp_pa(session_data);
> +	data->session_len = params.session_len;
> +	data->handle = sev->handle;
> +
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, data, &argp->error);
> +
> +	if (!ret && copy_to_user((void __user *)(uintptr_t)params.session_uaddr,
> +			session_data, params.session_len)) {
> +		ret = -EFAULT;
> +		goto e_free;
> +	}
> +
> +	params.policy = data->policy;
> +	params.session_len = data->session_len;
> +	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params,
> +				sizeof(struct kvm_sev_send_start)))
> +		ret = -EFAULT;
> +
> +e_free:
> +	kfree(data);
> +e_free_amd_cert:
> +	kfree(amd_certs);
> +e_free_plat_cert:
> +	kfree(plat_certs);
> +e_free_pdh:
> +	kfree(pdh_cert);
> +e_free_session:
> +	kfree(session_data);
> +	return ret;
> +}
> +
>   int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>   {
>   	struct kvm_sev_cmd sev_cmd;
> @@ -1163,6 +1285,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>   	case KVM_SEV_GET_ATTESTATION_REPORT:
>   		r = sev_get_attestation_report(kvm, &sev_cmd);
>   		break;
> +	case KVM_SEV_SEND_START:
> +		r = sev_send_start(kvm, &sev_cmd);
> +		break;
>   	default:
>   		r = -EINVAL;
>   		goto out;
> diff --git a/include/linux/psp-sev.h b/include/linux/psp-sev.h
> index b801ead1e2bb..73da511b9423 100644
> --- a/include/linux/psp-sev.h
> +++ b/include/linux/psp-sev.h
> @@ -326,11 +326,11 @@ struct sev_data_send_start {
>   	u64 pdh_cert_address;			/* In */
>   	u32 pdh_cert_len;			/* In */
>   	u32 reserved1;
> -	u64 plat_cert_address;			/* In */
> -	u32 plat_cert_len;			/* In */
> +	u64 plat_certs_address;			/* In */
> +	u32 plat_certs_len;			/* In */
>   	u32 reserved2;
> -	u64 amd_cert_address;			/* In */
> -	u32 amd_cert_len;			/* In */
> +	u64 amd_certs_address;			/* In */
> +	u32 amd_certs_len;			/* In */
>   	u32 reserved3;
>   	u64 session_address;			/* In */
>   	u32 session_len;			/* In/Out */
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index f6afee209620..ac53ad2e7271 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1729,6 +1729,18 @@ struct kvm_sev_attestation_report {
>   	__u32 len;
>   };
>   
> +struct kvm_sev_send_start {
> +	__u32 policy;
> +	__u64 pdh_cert_uaddr;
> +	__u32 pdh_cert_len;
> +	__u64 plat_certs_uaddr;
> +	__u32 plat_certs_len;
> +	__u64 amd_certs_uaddr;
> +	__u32 amd_certs_len;
> +	__u64 session_uaddr;
> +	__u32 session_len;
> +};
> +
>   #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
>   #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
>   #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  2021-04-20  8:38   ` Paolo Bonzini
@ 2021-04-20  9:18     ` Paolo Bonzini
  0 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  9:18 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 20/04/21 10:38, Paolo Bonzini wrote:
> On 15/04/21 17:54, Ashish Kalra wrote:
>> +    }
>> +
>> +    sev->handle = start->handle;
>> +    sev->fd = argp->sev_fd;
> 
> These two lines are spurious, I'll delete them.

And this is wrong as well.  My apologies.

Paolo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-15 15:57 ` [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
@ 2021-04-20  9:39   ` Paolo Bonzini
  2021-04-21 10:05   ` Borislav Petkov
  1 sibling, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  9:39 UTC (permalink / raw)
  To: Ashish Kalra, bp
  Cc: tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm, linux-kernel,
	srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:57, Ashish Kalra wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
> 
> Invoke a hypercall when a memory region is changed from encrypted ->
> decrypted and vice versa. Hypervisor needs to know the page encryption
> status during the guest migration.

Boris, can you ack this patch?

Paolo

> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>   arch/x86/include/asm/paravirt.h       | 10 +++++
>   arch/x86/include/asm/paravirt_types.h |  2 +
>   arch/x86/kernel/paravirt.c            |  1 +
>   arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
>   arch/x86/mm/pat/set_memory.c          |  7 ++++
>   5 files changed, 76 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 4abf110e2243..efaa3e628967 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -84,6 +84,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>   	PVOP_VCALL1(mmu.exit_mmap, mm);
>   }
>   
> +static inline void page_encryption_changed(unsigned long vaddr, int npages,
> +						bool enc)
> +{
> +	PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
> +}
> +
>   #ifdef CONFIG_PARAVIRT_XXL
>   static inline void load_sp0(unsigned long sp0)
>   {
> @@ -799,6 +805,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
>   static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>   {
>   }
> +
> +static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
> +{
> +}
>   #endif
>   #endif /* __ASSEMBLY__ */
>   #endif /* _ASM_X86_PARAVIRT_H */
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index de87087d3bde..69ef9c207b38 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -195,6 +195,8 @@ struct pv_mmu_ops {
>   
>   	/* Hook for intercepting the destruction of an mm_struct. */
>   	void (*exit_mmap)(struct mm_struct *mm);
> +	void (*page_encryption_changed)(unsigned long vaddr, int npages,
> +					bool enc);
>   
>   #ifdef CONFIG_PARAVIRT_XXL
>   	struct paravirt_callee_save read_cr2;
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index c60222ab8ab9..9f206e192f6b 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -335,6 +335,7 @@ struct paravirt_patch_template pv_ops = {
>   			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
>   
>   	.mmu.exit_mmap		= paravirt_nop,
> +	.mmu.page_encryption_changed	= paravirt_nop,
>   
>   #ifdef CONFIG_PARAVIRT_XXL
>   	.mmu.read_cr2		= __PV_IS_CALLEE_SAVE(native_read_cr2),
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index ae78cef79980..fae9ccbd0da7 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -19,6 +19,7 @@
>   #include <linux/kernel.h>
>   #include <linux/bitops.h>
>   #include <linux/dma-mapping.h>
> +#include <linux/kvm_para.h>
>   
>   #include <asm/tlbflush.h>
>   #include <asm/fixmap.h>
> @@ -29,6 +30,7 @@
>   #include <asm/processor-flags.h>
>   #include <asm/msr.h>
>   #include <asm/cmdline.h>
> +#include <asm/kvm_para.h>
>   
>   #include "mm_internal.h"
>   
> @@ -229,6 +231,47 @@ void __init sev_setup_arch(void)
>   	swiotlb_adjust_size(size);
>   }
>   
> +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
> +					bool enc)
> +{
> +	unsigned long sz = npages << PAGE_SHIFT;
> +	unsigned long vaddr_end, vaddr_next;
> +
> +	vaddr_end = vaddr + sz;
> +
> +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> +		int psize, pmask, level;
> +		unsigned long pfn;
> +		pte_t *kpte;
> +
> +		kpte = lookup_address(vaddr, &level);
> +		if (!kpte || pte_none(*kpte))
> +			return;
> +
> +		switch (level) {
> +		case PG_LEVEL_4K:
> +			pfn = pte_pfn(*kpte);
> +			break;
> +		case PG_LEVEL_2M:
> +			pfn = pmd_pfn(*(pmd_t *)kpte);
> +			break;
> +		case PG_LEVEL_1G:
> +			pfn = pud_pfn(*(pud_t *)kpte);
> +			break;
> +		default:
> +			return;
> +		}
> +
> +		psize = page_level_size(level);
> +		pmask = page_level_mask(level);
> +
> +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
> +
> +		vaddr_next = (vaddr & pmask) + psize;
> +	}
> +}
> +
>   static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>   {
>   	pgprot_t old_prot, new_prot;
> @@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>   static int __init early_set_memory_enc_dec(unsigned long vaddr,
>   					   unsigned long size, bool enc)
>   {
> -	unsigned long vaddr_end, vaddr_next;
> +	unsigned long vaddr_end, vaddr_next, start;
>   	unsigned long psize, pmask;
>   	int split_page_size_mask;
>   	int level, ret;
>   	pte_t *kpte;
>   
> +	start = vaddr;
>   	vaddr_next = vaddr;
>   	vaddr_end = vaddr + size;
>   
> @@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
>   
>   	ret = 0;
>   
> +	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
> +					enc);
>   out:
>   	__flush_tlb_all();
>   	return ret;
> @@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
>   	if (sev_active() && !sev_es_active())
>   		static_branch_enable(&sev_enable_key);
>   
> +#ifdef CONFIG_PARAVIRT
> +	/*
> +	 * With SEV, we need to make a hypercall when page encryption state is
> +	 * changed.
> +	 */
> +	if (sev_active())
> +		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
> +#endif
> +
>   	print_mem_encrypt_feature_info();
>   }
>   
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 16f878c26667..3576b583ac65 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
>   #include <asm/proto.h>
>   #include <asm/memtype.h>
>   #include <asm/set_memory.h>
> +#include <asm/paravirt.h>
>   
>   #include "../mm_internal.h"
>   
> @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>   	 */
>   	cpa_flush(&cpa, 0);
>   
> +	/* Notify hypervisor that a given memory range is mapped encrypted
> +	 * or decrypted. The hypervisor will use this information during the
> +	 * VM migration.
> +	 */
> +	page_encryption_changed(addr, numpages, enc);
> +
>   	return ret;
>   }
>   
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2021-04-15 15:58 ` [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
  2021-04-19 23:06   ` Sean Christopherson
@ 2021-04-20  9:47   ` Paolo Bonzini
  1 sibling, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20  9:47 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:58, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> for host-side support for SEV live migration. Also add a new custom
> MSR_KVM_SEV_LIVE_MIGRATION for guest to enable the SEV live migration
> feature.
> 
> MSR is handled by userspace using MSR filters.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Reviewed-by: Steve Rutherford <srutherford@google.com>

Let's leave the MSR out for now and rename the feature to 
KVM_FEATURE_HC_PAGE_ENC_STATUS.

Paolo

> ---
>   Documentation/virt/kvm/cpuid.rst     |  5 +++++
>   Documentation/virt/kvm/msr.rst       | 12 ++++++++++++
>   arch/x86/include/uapi/asm/kvm_para.h |  4 ++++
>   arch/x86/kvm/cpuid.c                 |  3 ++-
>   4 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
> index cf62162d4be2..0bdb6cdb12d3 100644
> --- a/Documentation/virt/kvm/cpuid.rst
> +++ b/Documentation/virt/kvm/cpuid.rst
> @@ -96,6 +96,11 @@ KVM_FEATURE_MSI_EXT_DEST_ID        15          guest checks this feature bit
>                                                  before using extended destination
>                                                  ID bits in MSI address bits 11-5.
>   
> +KVM_FEATURE_SEV_LIVE_MIGRATION     16          guest checks this feature bit before
> +                                               using the page encryption state
> +                                               hypercall to notify the page state
> +                                               change
> +
>   KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24          host will warn if no guest-side
>                                                  per-cpu warps are expected in
>                                                  kvmclock
> diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
> index e37a14c323d2..020245d16087 100644
> --- a/Documentation/virt/kvm/msr.rst
> +++ b/Documentation/virt/kvm/msr.rst
> @@ -376,3 +376,15 @@ data:
>   	write '1' to bit 0 of the MSR, this causes the host to re-scan its queue
>   	and check if there are more notifications pending. The MSR is available
>   	if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
> +
> +MSR_KVM_SEV_LIVE_MIGRATION:
> +        0x4b564d08
> +
> +	Control SEV Live Migration features.
> +
> +data:
> +        Bit 0 enables (1) or disables (0) host-side SEV Live Migration feature,
> +        in other words, this is guest->host communication that it's properly
> +        handling the shared pages list.
> +
> +        All other bits are reserved.
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 950afebfba88..f6bfa138874f 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -33,6 +33,7 @@
>   #define KVM_FEATURE_PV_SCHED_YIELD	13
>   #define KVM_FEATURE_ASYNC_PF_INT	14
>   #define KVM_FEATURE_MSI_EXT_DEST_ID	15
> +#define KVM_FEATURE_SEV_LIVE_MIGRATION	16
>   
>   #define KVM_HINTS_REALTIME      0
>   
> @@ -54,6 +55,7 @@
>   #define MSR_KVM_POLL_CONTROL	0x4b564d05
>   #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>   #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
> +#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08
>   
>   struct kvm_steal_time {
>   	__u64 steal;
> @@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
>   #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
>   #define KVM_PV_EOI_DISABLED 0x0
>   
> +#define KVM_SEV_LIVE_MIGRATION_ENABLED BIT_ULL(0)
> +
>   #endif /* _UAPI_ASM_X86_KVM_PARA_H */
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 6bd2f8b830e4..4e2e69a692aa 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -812,7 +812,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>   			     (1 << KVM_FEATURE_PV_SEND_IPI) |
>   			     (1 << KVM_FEATURE_POLL_CONTROL) |
>   			     (1 << KVM_FEATURE_PV_SCHED_YIELD) |
> -			     (1 << KVM_FEATURE_ASYNC_PF_INT);
> +			     (1 << KVM_FEATURE_ASYNC_PF_INT) |
> +			     (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);
>   
>   		if (sched_info_on())
>   			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
> 



* Re: [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.
  2021-04-19 23:06   ` Sean Christopherson
@ 2021-04-20 10:49     ` Paolo Bonzini
  0 siblings, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20 10:49 UTC (permalink / raw)
  To: Sean Christopherson, Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, venu.busireddy, brijesh.singh

On 20/04/21 01:06, Sean Christopherson wrote:
>> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
>> index 950afebfba88..f6bfa138874f 100644
>> --- a/arch/x86/include/uapi/asm/kvm_para.h
>> +++ b/arch/x86/include/uapi/asm/kvm_para.h
>> @@ -33,6 +33,7 @@
>>   #define KVM_FEATURE_PV_SCHED_YIELD	13
>>   #define KVM_FEATURE_ASYNC_PF_INT	14
>>   #define KVM_FEATURE_MSI_EXT_DEST_ID	15
>> +#define KVM_FEATURE_SEV_LIVE_MIGRATION	16
>>   
>>   #define KVM_HINTS_REALTIME      0
>>   
>> @@ -54,6 +55,7 @@
>>   #define MSR_KVM_POLL_CONTROL	0x4b564d05
>>   #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>>   #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
>> +#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08
>>   
>>   struct kvm_steal_time {
>>   	__u64 steal;
>> @@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
>>   #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
>>   #define KVM_PV_EOI_DISABLED 0x0
>>   
>> +#define KVM_SEV_LIVE_MIGRATION_ENABLED BIT_ULL(0)
> 
> Even though the intent is to "force" userspace to intercept the MSR, I think KVM
> should at least emulate the legal bits as a nop.  Deferring completely to
> userspace is rather bizarre as there's not really anything to justify KVM
> getting involved.  It would also force userspace to filter the MSR just to
> support the hypercall.

I think this is the intention: the hypercall by itself cannot do much if
you cannot tell userspace that it's up to date.

On the other hand it is kind of wrong that KVM_GET_SUPPORTED_CPUID
returns the feature, but the MSR is not supported.

> Somewhat of a nit, but I think we should do something like s/ENABLED/READY,

Agreed.  I'll send a patch that puts everything together.

Paolo
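
For reference, a minimal sketch of the nop emulation discussed above, assuming
it would live in kvm_set_msr_common()'s switch and reuse the guest_pv_has()
CPUID check (illustrative only, not the merged code):

	case MSR_KVM_SEV_LIVE_MIGRATION:
		if (!guest_pv_has(vcpu, KVM_FEATURE_SEV_LIVE_MIGRATION))
			return 1;

		/* Accept the one defined bit as a nop, reject reserved bits. */
		if (data & ~KVM_SEV_LIVE_MIGRATION_ENABLED)
			return 1;
		break;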



* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-15 16:01 ` [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
@ 2021-04-20 10:52   ` Paolo Bonzini
  2021-04-21 14:44   ` Borislav Petkov
  1 sibling, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20 10:52 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

On 15/04/21 18:01, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> The guest support for detecting and enabling SEV Live migration
> feature uses the following logic :
> 
>   - kvm_init_platform() invokes check_kvm_sev_migration(), which
>     checks if it is booted under EFI
> 
>     - If not EFI,
> 
>       i) check for the KVM_FEATURE_CPUID
> 
>       ii) if CPUID reports that migration is supported, issue a wrmsrl()
>           to enable the SEV live migration support
> 
>     - If EFI,
> 
>       i) check for the KVM_FEATURE_CPUID
> 
>       ii) If CPUID reports that migration is supported, read the UEFI variable which
>           indicates OVMF support for live migration
> 
>       iii) if the variable indicates live migration is supported, issue a wrmsrl() to
>            enable the SEV live migration support
> 
> The EFI live migration check is done using a late_initcall() callback.
> 
> Also, ensure that _bss_decrypted section is marked as decrypted in the
> shared pages list.
> 
> Also adds kexec support for SEV Live Migration.
> 
> Reset the host's shared pages list related to kernel
> specific page encryption status settings before we load a
> new kernel by kexec. We cannot reset the complete
> shared pages list here as we need to retain the
> UEFI/OVMF firmware specific settings.
> 
> The host's shared pages list is maintained for the
> guest to keep track of all unencrypted guest memory regions,
> therefore we need to explicitly mark all shared pages as
> encrypted again before rebooting into the new guest kernel.
> 
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

Boris, this one needs an ACK as well.

Paolo

> ---
>   arch/x86/include/asm/mem_encrypt.h |  8 ++++
>   arch/x86/kernel/kvm.c              | 55 +++++++++++++++++++++++++
>   arch/x86/mm/mem_encrypt.c          | 64 ++++++++++++++++++++++++++++++
>   3 files changed, 127 insertions(+)
> 
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 31c4df123aa0..19b77f3a62dc 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -21,6 +21,7 @@
>   extern u64 sme_me_mask;
>   extern u64 sev_status;
>   extern bool sev_enabled;
> +extern bool sev_live_migration_enabled;
>   
>   void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
>   			 unsigned long decrypted_kernel_vaddr,
> @@ -44,8 +45,11 @@ void __init sme_enable(struct boot_params *bp);
>   
>   int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
>   int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
> +					    bool enc);
>   
>   void __init mem_encrypt_free_decrypted_mem(void);
> +void __init check_kvm_sev_migration(void);
>   
>   /* Architecture __weak replacement functions */
>   void __init mem_encrypt_init(void);
> @@ -60,6 +64,7 @@ bool sev_es_active(void);
>   #else	/* !CONFIG_AMD_MEM_ENCRYPT */
>   
>   #define sme_me_mask	0ULL
> +#define sev_live_migration_enabled	false
>   
>   static inline void __init sme_early_encrypt(resource_size_t paddr,
>   					    unsigned long size) { }
> @@ -84,8 +89,11 @@ static inline int __init
>   early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
>   static inline int __init
>   early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
> +static inline void __init
> +early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
>   
>   static inline void mem_encrypt_free_decrypted_mem(void) { }
> +static inline void check_kvm_sev_migration(void) { }
>   
>   #define __bss_decrypted
>   
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 78bb0fae3982..94ef16d263a7 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -26,6 +26,7 @@
>   #include <linux/kprobes.h>
>   #include <linux/nmi.h>
>   #include <linux/swait.h>
> +#include <linux/efi.h>
>   #include <asm/timer.h>
>   #include <asm/cpu.h>
>   #include <asm/traps.h>
> @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
>   	early_set_memory_decrypted((unsigned long) ptr, size);
>   }
>   
> +static int __init setup_kvm_sev_migration(void)
> +{
> +	efi_char16_t efi_sev_live_migration_enabled[] = L"SevLiveMigrationEnabled";
> +	efi_guid_t efi_variable_guid = MEM_ENCRYPT_GUID;
> +	efi_status_t status;
> +	unsigned long size;
> +	bool enabled;
> +
> +	/*
> +	 * check_kvm_sev_migration() invoked via kvm_init_platform() before
> +	 * this callback would have set up the indicator that live migration
> +	 * feature is supported/enabled.
> +	 */
> +	if (!sev_live_migration_enabled)
> +		return 0;
> +
> +	if (!efi_enabled(EFI_BOOT))
> +		return 0;
> +
> +	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
> +		pr_info("%s : EFI runtime services are not enabled\n", __func__);
> +		return 0;
> +	}
> +
> +	size = sizeof(enabled);
> +
> +	/* Get variable contents into buffer */
> +	status = efi.get_variable(efi_sev_live_migration_enabled,
> +				  &efi_variable_guid, NULL, &size, &enabled);
> +
> +	if (status == EFI_NOT_FOUND) {
> +		pr_info("%s : EFI live migration variable not found\n", __func__);
> +		return 0;
> +	}
> +
> +	if (status != EFI_SUCCESS) {
> +		pr_info("%s : EFI variable retrieval failed\n", __func__);
> +		return 0;
> +	}
> +
> +	if (enabled == 0) {
> +		pr_info("%s: live migration disabled in EFI\n", __func__);
> +		return 0;
> +	}
> +
> +	pr_info("%s : live migration enabled in EFI\n", __func__);
> +	wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
> +	return 0;
> +}
> +
> +late_initcall(setup_kvm_sev_migration);
> +
>   /*
>    * Iterate through all possible CPUs and map the memory region pointed
>    * by apf_reason, steal_time and kvm_apic_eoi as decrypted at once.
> @@ -747,6 +801,7 @@ static bool __init kvm_msi_ext_dest_id(void)
>   
>   static void __init kvm_init_platform(void)
>   {
> +	check_kvm_sev_migration();
>   	kvmclock_init();
>   	x86_platform.apic_post_init = kvm_apic_init;
>   }
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index fae9ccbd0da7..382d1d4f00f5 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -20,6 +20,7 @@
>   #include <linux/bitops.h>
>   #include <linux/dma-mapping.h>
>   #include <linux/kvm_para.h>
> +#include <linux/efi.h>
>   
>   #include <asm/tlbflush.h>
>   #include <asm/fixmap.h>
> @@ -31,6 +32,7 @@
>   #include <asm/msr.h>
>   #include <asm/cmdline.h>
>   #include <asm/kvm_para.h>
> +#include <asm/e820/api.h>
>   
>   #include "mm_internal.h"
>   
> @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
>   
>   bool sev_enabled __section(".data");
>   
> +bool sev_live_migration_enabled __section(".data");
> +
>   /* Buffer used for early in-place encryption by BSP, no locking needed */
>   static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
>   
> @@ -237,6 +241,9 @@ static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
>   	unsigned long sz = npages << PAGE_SHIFT;
>   	unsigned long vaddr_end, vaddr_next;
>   
> +	if (!sev_live_migration_enabled)
> +		return;
> +
>   	vaddr_end = vaddr + sz;
>   
>   	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> @@ -407,6 +414,12 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
>   	return early_set_memory_enc_dec(vaddr, size, true);
>   }
>   
> +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
> +					bool enc)
> +{
> +	set_memory_enc_dec_hypercall(vaddr, npages, enc);
> +}
> +
>   /*
>    * SME and SEV are very similar but they are not the same, so there are
>    * times that the kernel will need to distinguish between SME and SEV. The
> @@ -462,6 +475,57 @@ bool force_dma_unencrypted(struct device *dev)
>   	return false;
>   }
>   
> +void __init check_kvm_sev_migration(void)
> +{
> +	if (sev_active() &&
> +	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
> +		unsigned long nr_pages;
> +		int i;
> +
> +		pr_info("KVM enable live migration\n");
> +		WRITE_ONCE(sev_live_migration_enabled, true);
> +
> +		/*
> +		 * Reset the host's shared pages list related to kernel
> +		 * specific page encryption status settings before we load a
> +		 * new kernel by kexec. Reset the page encryption status
> +		 * during early boot instead of just before kexec to avoid SMP
> +		 * races during kvm_pv_guest_cpu_reboot().
> +		 * NOTE: We cannot reset the complete shared pages list
> +		 * here as we need to retain the UEFI/OVMF firmware
> +		 * specific settings.
> +		 */
> +
> +		for (i = 0; i < e820_table->nr_entries; i++) {
> +			struct e820_entry *entry = &e820_table->entries[i];
> +
> +			if (entry->type != E820_TYPE_RAM)
> +				continue;
> +
> +			nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
> +
> +			kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS, entry->addr,
> +					   nr_pages, 1);
> +		}
> +
> +		/*
> +		 * Ensure that _bss_decrypted section is marked as decrypted in the
> +		 * shared pages list.
> +		 */
> +		nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
> +					PAGE_SIZE);
> +		early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
> +						nr_pages, 0);
> +
> +		/*
> +		 * If not booted using EFI, enable Live migration support.
> +		 */
> +		if (!efi_enabled(EFI_BOOT))
> +			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> +			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> +	}
> +}
> +
>   void __init mem_encrypt_free_decrypted_mem(void)
>   {
>   	unsigned long vaddr, vaddr_end, npages;
> 



* Re: [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  2021-04-15 15:57 ` [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
@ 2021-04-20 11:10   ` Paolo Bonzini
  2021-04-20 17:24     ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20 11:10 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:57, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> This hypercall is used by the SEV guest to notify a change in the page
> encryption status to the hypervisor. The hypercall should be invoked
> only when the encryption attribute is changed from encrypted -> decrypted
> and vice versa. By default all guest pages are considered encrypted.
> 
> The hypercall exits to userspace to manage the guest shared regions and
> integrate with the userspace VMM's migration code.

I think this should be exposed to userspace as a capability, rather than 
as a CPUID bit.  Userspace then can enable the capability and set the 
CPUID bit if it wants.

The reason is that userspace could pass KVM_GET_SUPPORTED_CPUID to
KVM_SET_CPUID2 and the hypercall then would break the guest.

Paolo
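
As a sketch of that flow, userspace would enable the capability on the VM and
only then advertise the feature bit itself; the capability name below is made
up for illustration, and "entry" is assumed to point at the KVM_CPUID_FEATURES
leaf of the array passed to KVM_SET_CPUID2:

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_PAGE_ENC_STATUS_HC,	/* hypothetical name */
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		err(1, "KVM_ENABLE_CAP");

	/* Only after enabling the cap is the feature bit safe to expose. */
	entry->eax |= 1 << KVM_FEATURE_SEV_LIVE_MIGRATION;
	if (ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid) < 0)
		err(1, "KVM_SET_CPUID2");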

> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   Documentation/virt/kvm/hypercalls.rst | 15 ++++++++++++++
>   arch/x86/include/asm/kvm_host.h       |  2 ++
>   arch/x86/kvm/svm/sev.c                |  1 +
>   arch/x86/kvm/x86.c                    | 29 +++++++++++++++++++++++++++
>   include/uapi/linux/kvm_para.h         |  1 +
>   5 files changed, 48 insertions(+)
> 
> diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
> index ed4fddd364ea..7aff0cebab7c 100644
> --- a/Documentation/virt/kvm/hypercalls.rst
> +++ b/Documentation/virt/kvm/hypercalls.rst
> @@ -169,3 +169,18 @@ a0: destination APIC ID
>   
>   :Usage example: When sending a call-function IPI-many to vCPUs, yield if
>   	        any of the IPI target vCPUs was preempted.
> +
> +
> +8. KVM_HC_PAGE_ENC_STATUS
> +-------------------------
> +:Architecture: x86
> +:Status: active
> +:Purpose: Notify the encryption status changes in guest page table (SEV guest)
> +
> +a0: the guest physical address of the start page
> +a1: the number of pages
> +a2: encryption attribute
> +
> +   Where:
> +	* 1: Encryption attribute is set
> +	* 0: Encryption attribute is cleared
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3768819693e5..42eb0fe3df5d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1050,6 +1050,8 @@ struct kvm_arch {
>   
>   	bool bus_lock_detection_enabled;
>   
> +	bool page_enc_hc_enable;
> +
>   	/* Deflect RDMSR and WRMSR to user space when they trigger a #GP */
>   	u32 user_space_msr_mask;
>   	struct kvm_x86_msr_filter __rcu *msr_filter;
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index c9795a22e502..5184a0c0131a 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -197,6 +197,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
>   	sev->active = true;
>   	sev->asid = asid;
>   	INIT_LIST_HEAD(&sev->regions_list);
> +	kvm->arch.page_enc_hc_enable = true;
>   
>   	return 0;
>   
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f7d12fca397b..e8986478b653 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8208,6 +8208,13 @@ static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
>   		kvm_vcpu_yield_to(target);
>   }
>   
> +static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
> +{
> +	kvm_rax_write(vcpu, vcpu->run->hypercall.ret);
> +	++vcpu->stat.hypercalls;
> +	return kvm_skip_emulated_instruction(vcpu);
> +}
> +
>   int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>   {
>   	unsigned long nr, a0, a1, a2, a3, ret;
> @@ -8273,6 +8280,28 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>   		kvm_sched_yield(vcpu->kvm, a0);
>   		ret = 0;
>   		break;
> +	case KVM_HC_PAGE_ENC_STATUS: {
> +		u64 gpa = a0, npages = a1, enc = a2;
> +
> +		ret = -KVM_ENOSYS;
> +		if (!vcpu->kvm->arch.page_enc_hc_enable)
> +			break;
> +
> +		if (!PAGE_ALIGNED(gpa) || !npages ||
> +		    gpa_to_gfn(gpa) + npages <= gpa_to_gfn(gpa)) {
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		vcpu->run->exit_reason        = KVM_EXIT_HYPERCALL;
> +		vcpu->run->hypercall.nr       = KVM_HC_PAGE_ENC_STATUS;
> +		vcpu->run->hypercall.args[0]  = gpa;
> +		vcpu->run->hypercall.args[1]  = npages;
> +		vcpu->run->hypercall.args[2]  = enc;
> +		vcpu->run->hypercall.longmode = op_64_bit;
> +		vcpu->arch.complete_userspace_io = complete_hypercall_exit;
> +		return 0;
> +	}
>   	default:
>   		ret = -KVM_ENOSYS;
>   		break;
> diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
> index 8b86609849b9..847b83b75dc8 100644
> --- a/include/uapi/linux/kvm_para.h
> +++ b/include/uapi/linux/kvm_para.h
> @@ -29,6 +29,7 @@
>   #define KVM_HC_CLOCK_PAIRING		9
>   #define KVM_HC_SEND_IPI		10
>   #define KVM_HC_SCHED_YIELD		11
> +#define KVM_HC_PAGE_ENC_STATUS		12
>   
>   /*
>    * hypercalls use architecture specific
> 



* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
                   ` (12 preceding siblings ...)
  2021-04-16 21:43 ` [PATCH v13 00/12] Add AMD SEV guest live migration support Steve Rutherford
@ 2021-04-20 11:11 ` Paolo Bonzini
  2021-04-20 18:51   ` Borislav Petkov
  13 siblings, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20 11:11 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 15/04/21 17:52, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> The series adds support for AMD SEV guest live migration commands. To protect the
> confidentiality of an SEV protected guest memory while in transit we need to
> use the SEV commands defined in SEV API spec [1].
> 
> SEV guest VMs have the concept of private and shared memory. Private memory
> is encrypted with the guest-specific key, while shared memory may be encrypted
> with hypervisor key. The commands provided by the SEV FW are meant to be used
> for the private memory only. The patch series introduces a new hypercall.
> The guest OS can use this hypercall to notify the page encryption status.
> If the page is encrypted with the guest-specific key then we use the SEV commands
> during the migration. If the page is not encrypted then we fall back to the default path.
> 
> The patch uses the KVM_EXIT_HYPERCALL exitcode and hypercall to
> userspace exit functionality as a common interface from the guest back to the
> VMM and passing on the guest shared/unencrypted page information to the
> userspace VMM/Qemu. Qemu can consult this information during migration to know
> whether the page is encrypted.
> 
> This section describes how the SEV live migration feature is negotiated
> between the host and guest, the host indicates this feature support via
> KVM_FEATURE_CPUID. The guest firmware (OVMF) detects this feature and
> sets a UEFI environment variable indicating OVMF support for live
> migration, the guest kernel also detects the host support for this
> feature via cpuid and in case of an EFI boot verifies if OVMF also
> supports this feature by getting the UEFI environment variable and if it
> is set then enables the live migration feature on the host by writing to a custom
> MSR, if not booted under EFI, then it simply enables the feature by
> again writing to the custom MSR. The MSR is also handled by the
> userspace VMM/Qemu.
> 
> A branch containing these patches is available here:
> https://github.com/AMDESE/linux/tree/sev-migration-v13
> 
> [1] https://developer.amd.com/wp-content/resources/55766.PDF

I have queued patches 1-6.

For patches 8 and 10 I will post my own version based on my review and 
feedback.

For guest patches, please repost separately so that x86 maintainers will 
notice them and ack them.

Paolo

> Changes since v12:
> - Reset page encryption status during early boot instead of just
>    before the kexec to avoid SMP races during kvm_pv_guest_cpu_reboot().
> - Remove incorrect log message in case of non-EFI boot and implicit
>    enabling of SEV live migration feature.
> 
> Changes since v11:
> - Clean up and remove kvm_x86_ops callback for page_enc_status_hc and
>    instead add a new per-VM flag to support/enable the page encryption
>    status hypercall.
> - Remove KVM_EXIT_DMA_SHARE/KVM_EXIT_DMA_UNSHARE exitcodes and instead
>    use the KVM_EXIT_HYPERCALL exitcode for page encryption status
>    hypercall to userspace functionality.
> 
> Changes since v10:
> - Adds new KVM_EXIT_DMA_SHARE/KVM_EXIT_DMA_UNSHARE hypercall to
>    userspace exit functionality as a common interface from the guest back to the
>    KVM and passing on the guest shared/unencrypted region information to the
>    userspace VMM/Qemu. KVM/host kernel does not maintain the guest shared
>    memory regions information anymore.
> - Remove implicit enabling of SEV live migration feature for an SEV
>    guest, now this is explicitly in control of the userspace VMM/Qemu.
> - Custom MSR handling is also now moved into userspace VMM/Qemu.
> - As KVM does not maintain the guest shared memory region information
>    anymore, sev_dbg_crypt() cannot bypass unencrypted guest memory
>    regions without support from userspace VMM/Qemu.
> 
> Changes since v9:
> - Transitioning from page encryption bitmap to the shared pages list
>    to keep track of guest's shared/unencrypted memory regions.
> - Move back to marking the complete _bss_decrypted section as
>    decrypted in the shared pages list.
> - Invoke a new function check_kvm_sev_migration() via kvm_init_platform()
>    for guest to query for host-side support for SEV live migration
>    and to enable the SEV live migration feature, to avoid
>    #ifdefs in code
> - Rename MSR_KVM_SEV_LIVE_MIG_EN to MSR_KVM_SEV_LIVE_MIGRATION.
> - Invoke a new function handle_unencrypted_region() from
>    sev_dbg_crypt() to bypass unencrypted guest memory regions.
> 
> Changes since v8:
> - Rebasing to kvm next branch.
> - Fixed and added comments as per review feedback on v8 patches.
> - Removed implicitly enabling live migration for incoming VMs in
>    in KVM_SET_PAGE_ENC_BITMAP, it is now done via KVM_SET_MSR ioctl.
> - Adds support for bypassing unencrypted guest memory regions for
>    DBG_DECRYPT API calls, guest memory region encryption status in
>    sev_dbg_decrypt() is referenced using the page encryption bitmap.
> 
> Changes since v7:
> - Removed the hypervisor specific hypercall/paravirt callback for
>    SEV live migration and moved back to calling kvm_sev_hypercall3
>    directly.
> - Fix build errors as
>    Reported-by: kbuild test robot <lkp@intel.com>, specifically fixed
>    build error when CONFIG_HYPERVISOR_GUEST=y and
>    CONFIG_AMD_MEM_ENCRYPT=n.
> - Implicitly enabled live migration for incoming VM(s) to handle
>    A->B->C->... VM migrations.
> - Fixed Documentation as per comments on v6 patches.
> - Fixed error return path in sev_send_update_data() as per comments
>    on v6 patches.
> 
> Changes since v6:
> - Rebasing to mainline and refactoring to the new split SVM
>    infrastructure.
> - Move to static allocation of the unified Page Encryption bitmap
>    instead of the dynamic resizing of the bitmap, the static allocation
>    is done implicitly by extending kvm_arch_commit_memory_region() callback
>    to add svm specific x86_ops which can read the userspace provided memory
>    region/memslots and calculate the amount of guest RAM managed by the KVM
>    and grow the bitmap.
> - Fixed KVM_SET_PAGE_ENC_BITMAP ioctl to set the whole bitmap instead
>    of simply clearing specific bits.
> - Removed KVM_PAGE_ENC_BITMAP_RESET ioctl, which is now performed using
>    KVM_SET_PAGE_ENC_BITMAP.
> - Extended guest support for enabling Live Migration feature by adding a
>    check for UEFI environment variable indicating OVMF support for Live
>    Migration feature and additionally checking for KVM capability for the
>    same feature. If not booted under EFI, then we simply check for KVM
>    capability.
> - Add hypervisor specific hypercall for SEV live migration by adding
>    a new paravirt callback as part of x86_hyper_runtime.
>    (x86 hypervisor specific runtime callbacks)
> - Moving MSR handling for MSR_KVM_SEV_LIVE_MIG_EN into svm/sev code
>    and adding check for SEV live migration enabled by guest in the
>    KVM_GET_PAGE_ENC_BITMAP ioctl.
> - Instead of the complete __bss_decrypted section, only specific variables
>    such as hv_clock_boot and wall_clock are marked as decrypted in the
>    page encryption bitmap
> 
> Changes since v5:
> - Fix build errors as
>    Reported-by: kbuild test robot <lkp@intel.com>
> 
> Changes since v4:
> - Host support has been added to extend KVM capabilities/feature bits to
>    include a new KVM_FEATURE_SEV_LIVE_MIGRATION, which the guest can
>    query for host-side support for SEV live migration and a new custom MSR
>    MSR_KVM_SEV_LIVE_MIG_EN is added for guest to enable the SEV live
>    migration feature.
> - Ensure that _bss_decrypted section is marked as decrypted in the
>    page encryption bitmap.
> - Fixing KVM_GET_PAGE_ENC_BITMAP ioctl to return the correct bitmap
>    as per the number of pages being requested by the user. Ensure that
>    we only copy bmap->num_pages bytes in the userspace buffer, if
>    bmap->num_pages is not byte aligned we read the trailing bits
>    from the userspace and copy those bits as is. This fixes guest
>    page(s) corruption issues observed after migration completion.
> - Add kexec support for SEV Live Migration to reset the host's
>    page encryption bitmap related to kernel specific page encryption
>    status settings before we load a new kernel by kexec. We cannot
>    reset the complete page encryption bitmap here as we need to
>    retain the UEFI/OVMF firmware specific settings.
> 
> Changes since v3:
> - Rebasing to mainline and testing.
> - Adding a new KVM_PAGE_ENC_BITMAP_RESET ioctl, which resets the
>    page encryption bitmap on a guest reboot event.
> - Adding a more reliable sanity check for GPA range being passed to
>    the hypercall to ensure that guest MMIO ranges are also marked
>    in the page encryption bitmap.
> 
> Changes since v2:
>   - reset the page encryption bitmap on vcpu reboot
> 
> Changes since v1:
>   - Add support to share the page encryption between the source and target
>     machine.
>   - Fix review feedback from Tom Lendacky.
>   - Add check to limit the session blob length.
>   - Update KVM_GET_PAGE_ENC_BITMAP ioctl to use the base_gfn instead of
>     the memory slot when querying the bitmap.
> 
> Ashish Kalra (4):
>    KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
>    KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature &
>      Custom MSR.
>    EFI: Introduce the new AMD Memory Encryption GUID.
>    x86/kvm: Add guest support for detecting and enabling SEV Live
>      Migration feature.
> 
> Brijesh Singh (8):
>    KVM: SVM: Add KVM_SEV SEND_START command
>    KVM: SVM: Add KVM_SEND_UPDATE_DATA command
>    KVM: SVM: Add KVM_SEV_SEND_FINISH command
>    KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
>    KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
>    KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
>    KVM: x86: Add AMD SEV specific Hypercall3
>    mm: x86: Invoke hypercall when page encryption status is changed
> 
>   .../virt/kvm/amd-memory-encryption.rst        | 120 +++++
>   Documentation/virt/kvm/cpuid.rst              |   5 +
>   Documentation/virt/kvm/hypercalls.rst         |  15 +
>   Documentation/virt/kvm/msr.rst                |  12 +
>   arch/x86/include/asm/kvm_host.h               |   2 +
>   arch/x86/include/asm/kvm_para.h               |  12 +
>   arch/x86/include/asm/mem_encrypt.h            |   8 +
>   arch/x86/include/asm/paravirt.h               |  10 +
>   arch/x86/include/asm/paravirt_types.h         |   2 +
>   arch/x86/include/uapi/asm/kvm_para.h          |   4 +
>   arch/x86/kernel/kvm.c                         |  55 +++
>   arch/x86/kernel/paravirt.c                    |   1 +
>   arch/x86/kvm/cpuid.c                          |   3 +-
>   arch/x86/kvm/svm/sev.c                        | 454 ++++++++++++++++++
>   arch/x86/kvm/x86.c                            |  29 ++
>   arch/x86/mm/mem_encrypt.c                     | 121 ++++-
>   arch/x86/mm/pat/set_memory.c                  |   7 +
>   include/linux/efi.h                           |   1 +
>   include/linux/psp-sev.h                       |   8 +-
>   include/uapi/linux/kvm.h                      |  39 ++
>   include/uapi/linux/kvm_para.h                 |   1 +
>   21 files changed, 903 insertions(+), 6 deletions(-)
> 



* Re: [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  2021-04-20 11:10   ` Paolo Bonzini
@ 2021-04-20 17:24     ` Sean Christopherson
  0 siblings, 0 replies; 43+ messages in thread
From: Sean Christopherson @ 2021-04-20 17:24 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, bp, thomas.lendacky, x86,
	kvm, linux-kernel, srutherford, venu.busireddy, brijesh.singh

On Tue, Apr 20, 2021, Paolo Bonzini wrote:
> On 15/04/21 17:57, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@amd.com>
> > 
> > This hypercall is used by the SEV guest to notify a change in the page
> > encryption status to the hypervisor. The hypercall should be invoked
> > only when the encryption attribute is changed from encrypted -> decrypted
> > and vice versa. By default all guest pages are considered encrypted.
> > 
> > The hypercall exits to userspace to manage the guest shared regions and
> > integrate with the userspace VMM's migration code.
> 
> I think this should be exposed to userspace as a capability, rather than as
> a CPUID bit.  Userspace then can enable the capability and set the CPUID bit
> if it wants.
> 
> The reason is that userspace could pass KVM_GET_SUPPORTED_CPUID to
> KVM_SET_CPUID2 and the hypercall then would break the guest.

Right, and that's partly why I was advocating that KVM emulate the MSR as a nop.


* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-20 11:11 ` Paolo Bonzini
@ 2021-04-20 18:51   ` Borislav Petkov
  2021-04-20 19:08     ` Paolo Bonzini
  0 siblings, 1 reply; 43+ messages in thread
From: Borislav Petkov @ 2021-04-20 18:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

Hey Paolo,

On Tue, Apr 20, 2021 at 01:11:31PM +0200, Paolo Bonzini wrote:
> I have queued patches 1-6.
> 
> For patches 8 and 10 I will post my own version based on my review and
> feedback.

can you pls push that tree up to here to a branch somewhere so that ...
 
> For guest patches, please repost separately so that x86 maintainers will
> notice them and ack them.

... I can take a look at the guest bits in the full context of the
changes?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-20 18:51   ` Borislav Petkov
@ 2021-04-20 19:08     ` Paolo Bonzini
  2021-04-20 20:28       ` Borislav Petkov
  0 siblings, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-20 19:08 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On 20/04/21 20:51, Borislav Petkov wrote:
> Hey Paolo,
> 
> On Tue, Apr 20, 2021 at 01:11:31PM +0200, Paolo Bonzini wrote:
>> I have queued patches 1-6.
>>
>> For patches 8 and 10 I will post my own version based on my review and
>> feedback.
> 
> can you pls push that tree up to here to a branch somewhere so that ...

Yup, for now it's all at kvm/queue and it will land in kvm/next tomorrow 
(hopefully).  The guest interface patches in KVM are very near the top.

Paolo

>> For guest patches, please repost separately so that x86 maintainers will
>> notice them and ack them.
> 
> ... I can take a look at the guest bits in the full context of the
> changes?



* Re: [PATCH v13 00/12] Add AMD SEV guest live migration support
  2021-04-20 19:08     ` Paolo Bonzini
@ 2021-04-20 20:28       ` Borislav Petkov
  0 siblings, 0 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-20 20:28 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On Tue, Apr 20, 2021 at 09:08:26PM +0200, Paolo Bonzini wrote:
> Yup, for now it's all at kvm/queue and it will land in kvm/next tomorrow
> (hopefully).  The guest interface patches in KVM are very near the top.

Thx, I'll have a look tomorrow.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-15 15:57 ` [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
  2021-04-20  9:39   ` Paolo Bonzini
@ 2021-04-21 10:05   ` Borislav Petkov
  2021-04-21 12:00     ` Paolo Bonzini
  2021-04-21 12:12     ` Ashish Kalra
  1 sibling, 2 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-21 10:05 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On Thu, Apr 15, 2021 at 03:57:26PM +0000, Ashish Kalra wrote:
> +static inline void page_encryption_changed(unsigned long vaddr, int npages,
> +						bool enc)

When you see a function name "page_encryption_changed", what does that
tell you about what that function does?

Dunno but it doesn't tell me a whole lot.

Now look at the other function names in struct pv_mmu_ops.

See the difference?

> +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,

If I had to guess what that function does just by reading its name, it
sets a memory encryption/decryption hypercall.

Am I close?

> +					bool enc)
> +{
> +	unsigned long sz = npages << PAGE_SHIFT;
> +	unsigned long vaddr_end, vaddr_next;
> +
> +	vaddr_end = vaddr + sz;
> +
> +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> +		int psize, pmask, level;
> +		unsigned long pfn;
> +		pte_t *kpte;
> +
> +		kpte = lookup_address(vaddr, &level);
> +		if (!kpte || pte_none(*kpte))
> +			return;
> +
> +		switch (level) {
> +		case PG_LEVEL_4K:
> +			pfn = pte_pfn(*kpte);
> +			break;
> +		case PG_LEVEL_2M:
> +			pfn = pmd_pfn(*(pmd_t *)kpte);
> +			break;
> +		case PG_LEVEL_1G:
> +			pfn = pud_pfn(*(pud_t *)kpte);
> +			break;
> +		default:
> +			return;
> +		}

Pretty much that same thing is in __set_clr_pte_enc(). Make a helper
function pls.

> +
> +		psize = page_level_size(level);
> +		pmask = page_level_mask(level);
> +
> +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
> +
> +		vaddr_next = (vaddr & pmask) + psize;
> +	}

As with other patches from Brijesh, that should be a while loop. :)

> +}
> +
>  static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  {
>  	pgprot_t old_prot, new_prot;
> @@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>  static int __init early_set_memory_enc_dec(unsigned long vaddr,
>  					   unsigned long size, bool enc)
>  {
> -	unsigned long vaddr_end, vaddr_next;
> +	unsigned long vaddr_end, vaddr_next, start;
>  	unsigned long psize, pmask;
>  	int split_page_size_mask;
>  	int level, ret;
>  	pte_t *kpte;
>  
> +	start = vaddr;
>  	vaddr_next = vaddr;
>  	vaddr_end = vaddr + size;
>  
> @@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
>  
>  	ret = 0;
>  
> +	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
> +					enc);
>  out:
>  	__flush_tlb_all();
>  	return ret;
> @@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
>  	if (sev_active() && !sev_es_active())
>  		static_branch_enable(&sev_enable_key);
>  
> +#ifdef CONFIG_PARAVIRT
> +	/*
> +	 * With SEV, we need to make a hypercall when page encryption state is
> +	 * changed.
> +	 */
> +	if (sev_active())
> +		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
> +#endif

There's already a sev_active() check above it. Merge the two pls.

> +
>  	print_mem_encrypt_feature_info();
>  }
>  
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 16f878c26667..3576b583ac65 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
>  #include <asm/proto.h>
>  #include <asm/memtype.h>
>  #include <asm/set_memory.h>
> +#include <asm/paravirt.h>
>  
>  #include "../mm_internal.h"
>  
> @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>  	 */
>  	cpa_flush(&cpa, 0);
>  
> +	/* Notify hypervisor that a given memory range is mapped encrypted
> +	 * or decrypted. The hypervisor will use this information during the
> +	 * VM migration.
> +	 */

Kernel comments style is:

	/*
	 * A sentence ending with a full-stop.
	 * Another sentence. ...
	 * More sentences. ...
	 */

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
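
The while-loop shape being asked for would look roughly like this (a sketch
only, reusing the pg_level_to_pfn() helper that comes up later in the thread):

	vaddr_end = vaddr + (npages << PAGE_SHIFT);

	while (vaddr < vaddr_end) {
		unsigned long pfn, psize, pmask;
		pte_t *kpte;
		int level;

		kpte = lookup_address(vaddr, &level);
		if (!kpte || pte_none(*kpte))
			return;

		pfn = pg_level_to_pfn(level, kpte, NULL);
		if (!pfn)
			return;

		psize = page_level_size(level);
		pmask = page_level_mask(level);

		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);

		vaddr = (vaddr & pmask) + psize;
	}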


* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-21 10:05   ` Borislav Petkov
@ 2021-04-21 12:00     ` Paolo Bonzini
  2021-04-21 14:09       ` Borislav Petkov
  2021-04-21 12:12     ` Ashish Kalra
  1 sibling, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-21 12:00 UTC (permalink / raw)
  To: Borislav Petkov, Ashish Kalra
  Cc: tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm, linux-kernel,
	srutherford, seanjc, venu.busireddy, brijesh.singh

On 21/04/21 12:05, Borislav Petkov wrote:
> On Thu, Apr 15, 2021 at 03:57:26PM +0000, Ashish Kalra wrote:
>> +static inline void page_encryption_changed(unsigned long vaddr, int npages,
>> +						bool enc)
> 
> When you see a function name "page_encryption_changed", what does that
> tell you about what that function does?
> 
> Dunno but it doesn't tell me a whole lot.
> 
> Now look at the other function names in struct pv_mmu_ops.
> 
> See the difference?
> 
>> +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
> 
> If I had to guess what that function does just by reading its name, it
> sets a memory encryption/decryption hypercall.
> 
> Am I close?

The words are right but the order is wrong (more like "hypercall to set
some memory's encrypted/decrypted state").  Perhaps
kvm_hypercall_set_page_enc_status?

page_encryption_changed does not sound bad to me, though; it's a
notification-like function name.  Maybe notify_page_enc_status_changed?

Paolo

>> +					bool enc)
>> +{
>> +	unsigned long sz = npages << PAGE_SHIFT;
>> +	unsigned long vaddr_end, vaddr_next;
>> +
>> +	vaddr_end = vaddr + sz;
>> +
>> +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
>> +		int psize, pmask, level;
>> +		unsigned long pfn;
>> +		pte_t *kpte;
>> +
>> +		kpte = lookup_address(vaddr, &level);
>> +		if (!kpte || pte_none(*kpte))
>> +			return;
>> +
>> +		switch (level) {
>> +		case PG_LEVEL_4K:
>> +			pfn = pte_pfn(*kpte);
>> +			break;
>> +		case PG_LEVEL_2M:
>> +			pfn = pmd_pfn(*(pmd_t *)kpte);
>> +			break;
>> +		case PG_LEVEL_1G:
>> +			pfn = pud_pfn(*(pud_t *)kpte);
>> +			break;
>> +		default:
>> +			return;
>> +		}
> 
> Pretty much that same thing is in __set_clr_pte_enc(). Make a helper
> function pls.
> 
>> +
>> +		psize = page_level_size(level);
>> +		pmask = page_level_mask(level);
>> +
>> +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
>> +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
>> +
>> +		vaddr_next = (vaddr & pmask) + psize;
>> +	}
> 
> As with other patches from Brijesh, that should be a while loop. :)
> 
>> +}
>> +
>>   static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>>   {
>>   	pgprot_t old_prot, new_prot;
>> @@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>>   static int __init early_set_memory_enc_dec(unsigned long vaddr,
>>   					   unsigned long size, bool enc)
>>   {
>> -	unsigned long vaddr_end, vaddr_next;
>> +	unsigned long vaddr_end, vaddr_next, start;
>>   	unsigned long psize, pmask;
>>   	int split_page_size_mask;
>>   	int level, ret;
>>   	pte_t *kpte;
>>   
>> +	start = vaddr;
>>   	vaddr_next = vaddr;
>>   	vaddr_end = vaddr + size;
>>   
>> @@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
>>   
>>   	ret = 0;
>>   
>> +	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
>> +					enc);
>>   out:
>>   	__flush_tlb_all();
>>   	return ret;
>> @@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
>>   	if (sev_active() && !sev_es_active())
>>   		static_branch_enable(&sev_enable_key);
>>   
>> +#ifdef CONFIG_PARAVIRT
>> +	/*
>> +	 * With SEV, we need to make a hypercall when page encryption state is
>> +	 * changed.
>> +	 */
>> +	if (sev_active())
>> +		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
>> +#endif
> 
> There's already a sev_active() check above it. Merge the two pls.
> 
>> +
>>   	print_mem_encrypt_feature_info();
>>   }
>>   
>> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
>> index 16f878c26667..3576b583ac65 100644
>> --- a/arch/x86/mm/pat/set_memory.c
>> +++ b/arch/x86/mm/pat/set_memory.c
>> @@ -27,6 +27,7 @@
>>   #include <asm/proto.h>
>>   #include <asm/memtype.h>
>>   #include <asm/set_memory.h>
>> +#include <asm/paravirt.h>
>>   
>>   #include "../mm_internal.h"
>>   
>> @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>>   	 */
>>   	cpa_flush(&cpa, 0);
>>   
>> +	/* Notify hypervisor that a given memory range is mapped encrypted
>> +	 * or decrypted. The hypervisor will use this information during the
>> +	 * VM migration.
>> +	 */
> 
> Kernel comments style is:
> 
> 	/*
> 	 * A sentence ending with a full-stop.
> 	 * Another sentence. ...
> 	 * More sentences. ...
> 	 */
> 



* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-21 10:05   ` Borislav Petkov
  2021-04-21 12:00     ` Paolo Bonzini
@ 2021-04-21 12:12     ` Ashish Kalra
  2021-04-21 13:50       ` Brijesh Singh
  2021-04-21 13:52       ` Borislav Petkov
  1 sibling, 2 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-21 12:12 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: pbonzini, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On Wed, Apr 21, 2021 at 12:05:08PM +0200, Borislav Petkov wrote:
> On Thu, Apr 15, 2021 at 03:57:26PM +0000, Ashish Kalra wrote:
> > +static inline void page_encryption_changed(unsigned long vaddr, int npages,
> > +						bool enc)
> 
> When you see a function name "page_encryption_changed", what does that
> tell you about what that function does?
> 
> Dunno but it doesn't tell me a whole lot.
> 
> Now look at the other function names in struct pv_mmu_ops.
> 
> See the difference?
> 
> > +static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
> 
> If I had to guess what that function does just by reading its name, it
> sets a memory encryption/decryption hypercall.
> 
> Am I close?
> 
> > +					bool enc)
> > +{
> > +	unsigned long sz = npages << PAGE_SHIFT;
> > +	unsigned long vaddr_end, vaddr_next;
> > +
> > +	vaddr_end = vaddr + sz;
> > +
> > +	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> > +		int psize, pmask, level;
> > +		unsigned long pfn;
> > +		pte_t *kpte;
> > +
> > +		kpte = lookup_address(vaddr, &level);
> > +		if (!kpte || pte_none(*kpte))
> > +			return;
> > +
> > +		switch (level) {
> > +		case PG_LEVEL_4K:
> > +			pfn = pte_pfn(*kpte);
> > +			break;
> > +		case PG_LEVEL_2M:
> > +			pfn = pmd_pfn(*(pmd_t *)kpte);
> > +			break;
> > +		case PG_LEVEL_1G:
> > +			pfn = pud_pfn(*(pud_t *)kpte);
> > +			break;
> > +		default:
> > +			return;
> > +		}
> 
> Pretty much that same thing is in __set_clr_pte_enc(). Make a helper
> function pls.
> 

Yes, both have some common code, but it is only this page level/size
check; they otherwise do different things with the page size they
compute, so I think it will be cleaner to keep the code separate for
these two functions.

> > +
> > +		psize = page_level_size(level);
> > +		pmask = page_level_mask(level);
> > +
> > +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
> > +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
> > +
> > +		vaddr_next = (vaddr & pmask) + psize;
> > +	}
> 
> As with other patches from Brijesh, that should be a while loop. :)
>

I see that early_set_memory_enc_dec() is also using a for loop, so which
patches are you referring to?

> > +}
> > +
> >  static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
> >  {
> >  	pgprot_t old_prot, new_prot;
> > @@ -286,12 +329,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
> >  static int __init early_set_memory_enc_dec(unsigned long vaddr,
> >  					   unsigned long size, bool enc)
> >  {
> > -	unsigned long vaddr_end, vaddr_next;
> > +	unsigned long vaddr_end, vaddr_next, start;
> >  	unsigned long psize, pmask;
> >  	int split_page_size_mask;
> >  	int level, ret;
> >  	pte_t *kpte;
> >  
> > +	start = vaddr;
> >  	vaddr_next = vaddr;
> >  	vaddr_end = vaddr + size;
> >  
> > @@ -346,6 +390,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
> >  
> >  	ret = 0;
> >  
> > +	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
> > +					enc);
> >  out:
> >  	__flush_tlb_all();
> >  	return ret;
> > @@ -481,6 +527,15 @@ void __init mem_encrypt_init(void)
> >  	if (sev_active() && !sev_es_active())
> >  		static_branch_enable(&sev_enable_key);
> >  
> > +#ifdef CONFIG_PARAVIRT
> > +	/*
> > +	 * With SEV, we need to make a hypercall when page encryption state is
> > +	 * changed.
> > +	 */
> > +	if (sev_active())
> > +		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
> > +#endif
> 
> There's already a sev_active() check above it. Merge the two pls.
>

Ok. 

> > +
> >  	print_mem_encrypt_feature_info();
> >  }
> >  
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index 16f878c26667..3576b583ac65 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -27,6 +27,7 @@
> >  #include <asm/proto.h>
> >  #include <asm/memtype.h>
> >  #include <asm/set_memory.h>
> > +#include <asm/paravirt.h>
> >  
> >  #include "../mm_internal.h"
> >  
> > @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
> >  	 */
> >  	cpa_flush(&cpa, 0);
> >  
> > +	/* Notify hypervisor that a given memory range is mapped encrypted
> > +	 * or decrypted. The hypervisor will use this information during the
> > +	 * VM migration.
> > +	 */
> 
> Kernel comments style is:
> 
> 	/*
> 	 * A sentence ending with a full-stop.
> 	 * Another sentence. ...
> 	 * More sentences. ...
> 	 */

Ok.

> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-21 12:12     ` Ashish Kalra
@ 2021-04-21 13:50       ` Brijesh Singh
  2021-04-21 13:52       ` Borislav Petkov
  1 sibling, 0 replies; 43+ messages in thread
From: Brijesh Singh @ 2021-04-21 13:50 UTC (permalink / raw)
  To: Ashish Kalra, Borislav Petkov
  Cc: brijesh.singh, pbonzini, tglx, mingo, hpa, joro, thomas.lendacky,
	x86, kvm, linux-kernel, srutherford, seanjc, venu.busireddy


On 4/21/21 7:12 AM, Ashish Kalra wrote:
>>> +
>>> +		psize = page_level_size(level);
>>> +		pmask = page_level_mask(level);
>>> +
>>> +		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
>>> +				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
>>> +
>>> +		vaddr_next = (vaddr & pmask) + psize;
>>> +	}
>> As with other patches from Brijesh, that should be a while loop. :)
>>
> I see that early_set_memory_enc_dec() is also using a for loop, so which
> patches are you referring to?
>
I guess Boris is referring to my SNP patches. Please go ahead and use
the while loop as recommended by Boris.

-Brijesh



* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-21 12:12     ` Ashish Kalra
  2021-04-21 13:50       ` Brijesh Singh
@ 2021-04-21 13:52       ` Borislav Petkov
  1 sibling, 0 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-21 13:52 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On Wed, Apr 21, 2021 at 12:12:13PM +0000, Ashish Kalra wrote:
> Yes, both have some common code, but it is only this page level/size
> ...

See below for what I mean. Diff on top of yours.

> I see that early_set_memory_enc_dec() is also using a for loop, so which
> patches are you referring to ?

The SNP guest set has this pattern:

https://lkml.kernel.org/r/20210408114049.GI10192@zn.tnic

---

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b1d59d2b3bf6..e823645101ee 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -232,6 +232,37 @@ void __init sev_setup_arch(void)
 	swiotlb_adjust_size(size);
 }
 
+static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
+{
+	unsigned long pfn = 0;
+	pgprot_t prot;
+
+	switch (level) {
+	case PG_LEVEL_4K:
+		pfn = pte_pfn(*kpte);
+		prot = pte_pgprot(*kpte);
+		break;
+
+	case PG_LEVEL_2M:
+		pfn = pmd_pfn(*(pmd_t *)kpte);
+		prot = pmd_pgprot(*(pmd_t *)kpte);
+		break;
+
+	case PG_LEVEL_1G:
+		pfn = pud_pfn(*(pud_t *)kpte);
+		prot = pud_pgprot(*(pud_t *)kpte);
+		break;
+
+	default:
+		return 0;
+	}
+
+	if (ret_prot)
+		*ret_prot = prot;
+
+	return pfn;
+}
+
 static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
 					bool enc)
 {
@@ -249,19 +280,9 @@ static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
 		if (!kpte || pte_none(*kpte))
 			return;
 
-		switch (level) {
-		case PG_LEVEL_4K:
-			pfn = pte_pfn(*kpte);
-			break;
-		case PG_LEVEL_2M:
-			pfn = pmd_pfn(*(pmd_t *)kpte);
-			break;
-		case PG_LEVEL_1G:
-			pfn = pud_pfn(*(pud_t *)kpte);
-			break;
-		default:
-			return;
-		}
+		pfn = pg_level_to_pfn(level, kpte, NULL);
+		if (!pfn)
+			continue;
 
 		psize = page_level_size(level);
 		pmask = page_level_mask(level);
@@ -279,22 +300,9 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	unsigned long pfn, pa, size;
 	pte_t new_pte;
 
-	switch (level) {
-	case PG_LEVEL_4K:
-		pfn = pte_pfn(*kpte);
-		old_prot = pte_pgprot(*kpte);
-		break;
-	case PG_LEVEL_2M:
-		pfn = pmd_pfn(*(pmd_t *)kpte);
-		old_prot = pmd_pgprot(*(pmd_t *)kpte);
-		break;
-	case PG_LEVEL_1G:
-		pfn = pud_pfn(*(pud_t *)kpte);
-		old_prot = pud_pgprot(*(pud_t *)kpte);
-		break;
-	default:
+	pfn = pg_level_to_pfn(level, kpte, &old_prot);
+	if (!pfn)
 		return;
-	}
 
 	new_prot = old_prot;
 	if (enc)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed
  2021-04-21 12:00     ` Paolo Bonzini
@ 2021-04-21 14:09       ` Borislav Petkov
  0 siblings, 0 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-21 14:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ashish Kalra, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh

On Wed, Apr 21, 2021 at 02:00:42PM +0200, Paolo Bonzini wrote:
> The words are right but the order is wrong (more like "hypercall to set some
> memory's encrypted/decrypted state").  Perhaps?
> kvm_hypercall_set_page_enc_status.

Yap.

> page_encryption_changed does not sound bad to me though, it's a
> notification-like function name. Maybe notify_page_enc_status_changed?

Yap again. Those are better.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-15 16:01 ` [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
  2021-04-20 10:52   ` Paolo Bonzini
@ 2021-04-21 14:44   ` Borislav Petkov
  2021-04-21 15:22     ` Ashish Kalra
  2021-04-21 15:38     ` Paolo Bonzini
  1 sibling, 2 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-21 14:44 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

On Thu, Apr 15, 2021 at 04:01:16PM +0000, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> The guest support for detecting and enabling SEV Live migration
> feature uses the following logic:
> 
>  - kvm_init_platform() invokes check_kvm_sev_migration() which
>    checks if it's booted under EFI
> 
>    - If not EFI,
> 
>      i) check for the KVM_FEATURE_CPUID

Where do you do that?

$ git grep KVM_FEATURE_CPUID
$

Do you mean

	kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)

per chance?

> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 78bb0fae3982..94ef16d263a7 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -26,6 +26,7 @@
>  #include <linux/kprobes.h>
>  #include <linux/nmi.h>
>  #include <linux/swait.h>
> +#include <linux/efi.h>
>  #include <asm/timer.h>
>  #include <asm/cpu.h>
>  #include <asm/traps.h>
> @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
>  	early_set_memory_decrypted((unsigned long) ptr, size);
>  }
>  
> +static int __init setup_kvm_sev_migration(void)

kvm_init_sev_migration() or so.

...

> @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
>  
>  bool sev_enabled __section(".data");
>  
> +bool sev_live_migration_enabled __section(".data");

Pls add a function called something like:

bool sev_feature_enabled(enum sev_feature)

and gets SEV_FEATURE_LIVE_MIGRATION and then use it instead of adding
yet another boolean which contains whether some aspect of SEV has been
enabled or not.

Then add a

static enum sev_feature sev_features;

in mem_encrypt.c and that function above will query that sev_features
enum for set flags.

Then, if you feel bored, you could convert sme_active, sev_active,
sev_es_active, mem_encrypt_active and whatever else code needs to query
any aspect of SEV being enabled or not, to that function.
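
IOW, a rough sketch of what that could look like (all of the names here
are illustrative only, not final):

	/* bit flags for the individual SEV features */
	enum sev_feature {
		SEV_FEATURE_LIVE_MIGRATION	= BIT(0),
		/* further SEV feature flags go here */
	};

	/* bitmask of the SEV_FEATURE_* flags which are enabled */
	static unsigned int sev_features;

	bool sev_feature_enabled(enum sev_feature feature)
	{
		return !!(sev_features & feature);
	}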

> +void __init check_kvm_sev_migration(void)
> +{
> +	if (sev_active() &&
> +	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {

Save an indentation level:

	if (!sev_active() ||
	    !kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
		return;

> +		unsigned long nr_pages;
> +		int i;
> +
> +		pr_info("KVM enable live migration\n");

That should be at the end of the function and say:

		pr_info("KVM live migration enabled.\n");

> +		WRITE_ONCE(sev_live_migration_enabled, true);

Why WRITE_ONCE?

And that needs to go to the end of the function too.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-21 14:44   ` Borislav Petkov
@ 2021-04-21 15:22     ` Ashish Kalra
  2021-04-21 15:32       ` Borislav Petkov
  2021-04-21 15:38     ` Paolo Bonzini
  1 sibling, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-21 15:22 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

On Wed, Apr 21, 2021 at 04:44:02PM +0200, Borislav Petkov wrote:
> On Thu, Apr 15, 2021 at 04:01:16PM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@amd.com>
> > 
> > The guest support for detecting and enabling SEV Live migration
> > feature uses the following logic:
> > 
> >  - kvm_init_platform() invokes check_kvm_sev_migration() which
> >    checks if it's booted under EFI
> > 
> >    - If not EFI,
> > 
> >      i) check for the KVM_FEATURE_CPUID
> 
> Where do you do that?
> 
> $ git grep KVM_FEATURE_CPUID
> $
> 
> Do you mean
> 
> 	kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)
> 
> per chance?
> 

Yes, the above means to get the KVM features CPUID and then check if
the live migration feature is supported, i.e.,
kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION). The above comments
were written more generically.

> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 78bb0fae3982..94ef16d263a7 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -26,6 +26,7 @@
> >  #include <linux/kprobes.h>
> >  #include <linux/nmi.h>
> >  #include <linux/swait.h>
> > +#include <linux/efi.h>
> >  #include <asm/timer.h>
> >  #include <asm/cpu.h>
> >  #include <asm/traps.h>
> > @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> >  	early_set_memory_decrypted((unsigned long) ptr, size);
> >  }
> >  
> > +static int __init setup_kvm_sev_migration(void)
> 
> kvm_init_sev_migration() or so.
> 
> ...
> 
> > @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
> >  
> >  bool sev_enabled __section(".data");
> >  
> > +bool sev_live_migration_enabled __section(".data");
> 
> Pls add a function called something like:
> 
> bool sev_feature_enabled(enum sev_feature)
> 
> and gets SEV_FEATURE_LIVE_MIGRATION and then use it instead of adding
> yet another boolean which contains whether some aspect of SEV has been
> enabled or not.
> 
> Then add a
> 
> static enum sev_feature sev_features;
> 
> in mem_encrypt.c and that function above will query that sev_features
> enum for set flags.
> 
> Then, if you feel bored, you could convert sme_active, sev_active,
> > sev_es_active, mem_encrypt_active and whatever else code needs to query
> any aspect of SEV being enabled or not, to that function.
> 

Ok.

> > +void __init check_kvm_sev_migration(void)
> > +{
> > +	if (sev_active() &&
> > +	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
> 
> Save an indentation level:
> 
> 	if (!sev_active() ||
> 	    !kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION))
> 		return;
> 
> > +		unsigned long nr_pages;
> > +		int i;
> > +
> > +		pr_info("KVM enable live migration\n");
> 
> That should be at the end of the function and say:
> 
> 		pr_info("KVM live migration enabled.\n");
> 
> > +		WRITE_ONCE(sev_live_migration_enabled, true);
> 
> Why WRITE_ONCE?
> 

Just to ensure that sev_live_migration_enabled is set to true before
it is used immediately afterwards in the same function.

Thanks,
Ashish

> And that needs to go to the end of the function too.
> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-21 15:22     ` Ashish Kalra
@ 2021-04-21 15:32       ` Borislav Petkov
  0 siblings, 0 replies; 43+ messages in thread
From: Borislav Petkov @ 2021-04-21 15:32 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: pbonzini, tglx, mingo, hpa, joro, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

On Wed, Apr 21, 2021 at 03:22:20PM +0000, Ashish Kalra wrote:

> Yes, the above means to get the KVM features CPUID and then check if
> the live migration feature is supported, i.e.,
> kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION). The above comments
> were written more generically.

Do not write generic comments please - write exact comments to state
precisely why you're doing what you're doing.

> Just to ensure that sev_live_migration_enabled is set to true before
> it is used immediately afterwards in the same function.

Why wouldn't it be set to true by the time the next function runs?

Do you have any concrete observations where this is not the case?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-21 14:44   ` Borislav Petkov
  2021-04-21 15:22     ` Ashish Kalra
@ 2021-04-21 15:38     ` Paolo Bonzini
  2021-04-21 18:48       ` Ashish Kalra
  1 sibling, 1 reply; 43+ messages in thread
From: Paolo Bonzini @ 2021-04-21 15:38 UTC (permalink / raw)
  To: Borislav Petkov, Ashish Kalra
  Cc: tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, srutherford, seanjc, venu.busireddy, brijesh.singh,
	kexec

On 21/04/21 16:44, Borislav Petkov wrote:
> On Thu, Apr 15, 2021 at 04:01:16PM +0000, Ashish Kalra wrote:
>> From: Ashish Kalra <ashish.kalra@amd.com>
>>
>> The guest support for detecting and enabling SEV Live migration
>> feature uses the following logic:
>>
>>   - kvm_init_platform() invokes check_kvm_sev_migration() which
>>     checks if it's booted under EFI
>>
>>     - If not EFI,
>>
>>       i) check for the KVM_FEATURE_CPUID
> 
> Where do you do that?
> 
> $ git grep KVM_FEATURE_CPUID
> $
> 
> Do you mean
> 
> 	kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)
> 
> per chance?

Yep.  Or KVM_CPUID_FEATURES perhaps.

> 
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 78bb0fae3982..94ef16d263a7 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -26,6 +26,7 @@
>>   #include <linux/kprobes.h>
>>   #include <linux/nmi.h>
>>   #include <linux/swait.h>
>> +#include <linux/efi.h>
>>   #include <asm/timer.h>
>>   #include <asm/cpu.h>
>>   #include <asm/traps.h>
>> @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
>>   	early_set_memory_decrypted((unsigned long) ptr, size);
>>   }
>>   
>> +static int __init setup_kvm_sev_migration(void)
> 
> kvm_init_sev_migration() or so.
> 
> ...
> 
>> @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
>>   
>>   bool sev_enabled __section(".data");
>>   
>> +bool sev_live_migration_enabled __section(".data");
> 
> Pls add a function called something like:
> 
> bool sev_feature_enabled(enum sev_feature)
> 
> and gets SEV_FEATURE_LIVE_MIGRATION and then use it instead of adding
> yet another boolean which contains whether some aspect of SEV has been
> enabled or not.
> 
> Then add a
> 
> static enum sev_feature sev_features;
> 
> in mem_encrypt.c and that function above will query that sev_features
> enum for set flags.

Even better: let's stop calling things SEV/SEV_ES.  Long term we want 
anyway to use things like mem_encrypt_enabled (SEV), 
guest_instruction_trap_enabled (SEV/ES), etc.

For this one we don't need a bool at all, we can simply check whether 
the pvop points to paravirt_nop.  Also keep everything but the BSS 
handling in arch/x86/kernel/kvm.c.  Only the BSS handling should be in 
arch/x86/mm/mem_encrypt.c.  This way all KVM paravirt hypercalls and 
MSRs are in kvm.c.

That is:

void kvm_init_platform(void)
{
	if (sev_active() &&
	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
		pv_ops.mmu.notify_page_enc_status_changed =
			kvm_sev_hc_page_enc_status;
		/* this takes care of bss_decrypted */
		early_set_page_enc_status();
		if (!efi_enabled(EFI_BOOT))
			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
			       KVM_SEV_LIVE_MIGRATION_ENABLED);
	}
	/* existing kvm_init_platform code goes here */
}

// the pvop is changed to take the pfn, so that the vaddr loop
// is not KVM specific
static inline void notify_page_enc_status_changed(unsigned long pfn,
				int npages, bool enc)
{
	PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
}

static void notify_addr_enc_status_changed(unsigned long addr,
					   int numpages, bool enc)
{
#ifdef CONFIG_PARAVIRT
	if (pv_ops.mmu.notify_page_enc_status_changed == paravirt_nop)
		return;

	/* the body of set_memory_enc_dec_hypercall goes here */
	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
		...
		notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT,
					       enc);
		vaddr_next = (vaddr & pmask) + psize;
	}
#endif
}

static int __set_memory_enc_dec(unsigned long addr,
				int numpages, bool enc)
{
	...
  	cpa_flush(&cpa, 0);
	notify_addr_enc_status_changed(addr, numpages, enc);
  	return ret;
}


> +static int __init setup_kvm_sev_migration(void)

Please rename this to include efi in the function name.

> 
> +		 */
> +		if (!efi_enabled(EFI_BOOT))
> +			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> +			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> +		} else {
> +			pr_info("KVM enable live migration feature unsupported\n");
> +		}
> +}

I think this pr_info is incorrect, because it can still be enabled in 
the late_initcall.  Just remove it as in the sketch above.

Paolo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-21 15:38     ` Paolo Bonzini
@ 2021-04-21 18:48       ` Ashish Kalra
  2021-04-21 19:19         ` Ashish Kalra
  0 siblings, 1 reply; 43+ messages in thread
From: Ashish Kalra @ 2021-04-21 18:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Borislav Petkov, tglx, mingo, hpa, joro, bp, thomas.lendacky,
	x86, kvm, linux-kernel, srutherford, seanjc, venu.busireddy,
	brijesh.singh, kexec

Hello Paolo,

The earlier patch #10 of the SEV live migration patches, which is now
part of the guest interface patches, used to define
KVM_FEATURE_SEV_LIVE_MIGRATION.

So now, will the guest patches need to define this feature?

Thanks,
Ashish

On Wed, Apr 21, 2021 at 05:38:45PM +0200, Paolo Bonzini wrote:
> On 21/04/21 16:44, Borislav Petkov wrote:
> > On Thu, Apr 15, 2021 at 04:01:16PM +0000, Ashish Kalra wrote:
> > > From: Ashish Kalra <ashish.kalra@amd.com>
> > > 
> > > The guest support for detecting and enabling SEV Live migration
> > > feature uses the following logic:
> > > 
> > >   - kvm_init_platform() invokes check_kvm_sev_migration() which
> > >     checks if it's booted under EFI
> > > 
> > >     - If not EFI,
> > > 
> > >       i) check for the KVM_FEATURE_CPUID
> > 
> > Where do you do that?
> > 
> > $ git grep KVM_FEATURE_CPUID
> > $
> > 
> > Do you mean
> > 
> > 	kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)
> > 
> > per chance?
> 
> Yep.  Or KVM_CPUID_FEATURES perhaps.
> 
> > 
> > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > index 78bb0fae3982..94ef16d263a7 100644
> > > --- a/arch/x86/kernel/kvm.c
> > > +++ b/arch/x86/kernel/kvm.c
> > > @@ -26,6 +26,7 @@
> > >   #include <linux/kprobes.h>
> > >   #include <linux/nmi.h>
> > >   #include <linux/swait.h>
> > > +#include <linux/efi.h>
> > >   #include <asm/timer.h>
> > >   #include <asm/cpu.h>
> > >   #include <asm/traps.h>
> > > @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> > >   	early_set_memory_decrypted((unsigned long) ptr, size);
> > >   }
> > > +static int __init setup_kvm_sev_migration(void)
> > 
> > kvm_init_sev_migration() or so.
> > 
> > ...
> > 
> > > @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
> > >   bool sev_enabled __section(".data");
> > > +bool sev_live_migration_enabled __section(".data");
> > 
> > Pls add a function called something like:
> > 
> > bool sev_feature_enabled(enum sev_feature)
> > 
> > and gets SEV_FEATURE_LIVE_MIGRATION and then use it instead of adding
> > yet another boolean which contains whether some aspect of SEV has been
> > enabled or not.
> > 
> > Then add a
> > 
> > static enum sev_feature sev_features;
> > 
> > in mem_encrypt.c and that function above will query that sev_features
> > enum for set flags.
> 
> > Even better: let's stop calling things SEV/SEV_ES.  Long term we want
> anyway to use things like mem_encrypt_enabled (SEV),
> guest_instruction_trap_enabled (SEV/ES), etc.
> 
> For this one we don't need a bool at all, we can simply check whether the
> pvop points to paravirt_nop.  Also keep everything but the BSS handling in
> arch/x86/kernel/kvm.c.  Only the BSS handling should be in
> arch/x86/mm/mem_encrypt.c.  This way all KVM paravirt hypercalls and MSRs
> are in kvm.c.
> 
> That is:
> 
> void kvm_init_platform(void)
> {
> 	if (sev_active() &&
> 	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
> 		pv_ops.mmu.notify_page_enc_status_changed =
> 			kvm_sev_hc_page_enc_status;
> 		/* this takes care of bss_decrypted */
> 		early_set_page_enc_status();
> 		if (!efi_enabled(EFI_BOOT))
> 			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> 			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> 	}
> 	/* existing kvm_init_platform code goes here */
> }
> 
> // the pvop is changed to take the pfn, so that the vaddr loop
> // is not KVM specific
> static inline void notify_page_enc_status_changed(unsigned long pfn,
> 				int npages, bool enc)
> {
> 	PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
> }
> 
> static void notify_addr_enc_status_changed(unsigned long addr,
> 					   int numpages, bool enc)
> {
> #ifdef CONFIG_PARAVIRT
> 	if (pv_ops.mmu.notify_page_enc_status_changed == paravirt_nop)
> 		return;
> 
> 	/* the body of set_memory_enc_dec_hypercall goes here */
> 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> 		...
> 		notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT,
> 					       enc);
> 		vaddr_next = (vaddr & pmask) + psize;
> 	}
> #endif
> }
> 
> static int __set_memory_enc_dec(unsigned long addr,
> 				int numpages, bool enc)
> {
> 	...
>  	cpa_flush(&cpa, 0);
> 	notify_addr_enc_status_changed(addr, numpages, enc);
>  	return ret;
> }
> 
> 
> > +static int __init setup_kvm_sev_migration(void)
> 
> Please rename this to include efi in the function name.
> 
> > 
> > +		 */
> > +		if (!efi_enabled(EFI_BOOT))
> > +			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> > +			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> > +		} else {
> > +			pr_info("KVM enable live migration feature unsupported\n");
> > +		}
> > +}
> 
> I think this pr_info is incorrect, because it can still be enabled in the
> late_initcall.  Just remove it as in the sketch above.
> 
> Paolo
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-04-21 18:48       ` Ashish Kalra
@ 2021-04-21 19:19         ` Ashish Kalra
  0 siblings, 0 replies; 43+ messages in thread
From: Ashish Kalra @ 2021-04-21 19:19 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Borislav Petkov, tglx, mingo, hpa, joro, bp, thomas.lendacky,
	x86, kvm, linux-kernel, srutherford, seanjc, venu.busireddy,
	brijesh.singh, kexec

To reiterate, in addition to KVM_FEATURE_HC_PAGE_ENC_STATUS, we also need
to add the new KVM_FEATURE_SEV_LIVE_MIGRATION feature for the guest to
check for host-side support for SEV live migration.

Or will the guest now check KVM_FEATURE_HC_PAGE_ENC_STATUS in CPUID and
then accordingly set bit0 in MSR_KVM_MIGRATION_CONTROL to enable SEV
live migration?
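
I.e., something roughly like this (a sketch only; the bit0 name is not
settled in this thread, KVM_MIGRATION_READY below is just a placeholder):

	if (sev_active() &&
	    kvm_para_has_feature(KVM_FEATURE_HC_PAGE_ENC_STATUS))
		wrmsrl(MSR_KVM_MIGRATION_CONTROL, KVM_MIGRATION_READY);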

Thanks,
Ashish

On Wed, Apr 21, 2021 at 06:48:32PM +0000, Ashish Kalra wrote:
> Hello Paolo,
> 
> The earlier patch #10 of the SEV live migration patches, which is now
> part of the guest interface patches, used to define
> KVM_FEATURE_SEV_LIVE_MIGRATION.
> 
> So now, will the guest patches need to define this feature?
> 
> Thanks,
> Ashish
> 
> On Wed, Apr 21, 2021 at 05:38:45PM +0200, Paolo Bonzini wrote:
> > On 21/04/21 16:44, Borislav Petkov wrote:
> > > On Thu, Apr 15, 2021 at 04:01:16PM +0000, Ashish Kalra wrote:
> > > > From: Ashish Kalra <ashish.kalra@amd.com>
> > > > 
> > > > The guest support for detecting and enabling SEV Live migration
> > > > feature uses the following logic:
> > > > 
> > > >   - kvm_init_platform() invokes check_kvm_sev_migration() which
> > > >     checks if it's booted under EFI
> > > > 
> > > >     - If not EFI,
> > > > 
> > > >       i) check for the KVM_FEATURE_CPUID
> > > 
> > > Where do you do that?
> > > 
> > > $ git grep KVM_FEATURE_CPUID
> > > $
> > > 
> > > Do you mean
> > > 
> > > 	kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)
> > > 
> > > per chance?
> > 
> > Yep.  Or KVM_CPUID_FEATURES perhaps.
> > 
> > > 
> > > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > > index 78bb0fae3982..94ef16d263a7 100644
> > > > --- a/arch/x86/kernel/kvm.c
> > > > +++ b/arch/x86/kernel/kvm.c
> > > > @@ -26,6 +26,7 @@
> > > >   #include <linux/kprobes.h>
> > > >   #include <linux/nmi.h>
> > > >   #include <linux/swait.h>
> > > > +#include <linux/efi.h>
> > > >   #include <asm/timer.h>
> > > >   #include <asm/cpu.h>
> > > >   #include <asm/traps.h>
> > > > @@ -429,6 +430,59 @@ static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> > > >   	early_set_memory_decrypted((unsigned long) ptr, size);
> > > >   }
> > > > +static int __init setup_kvm_sev_migration(void)
> > > 
> > > kvm_init_sev_migration() or so.
> > > 
> > > ...
> > > 
> > > > @@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(sev_enable_key);
> > > >   bool sev_enabled __section(".data");
> > > > +bool sev_live_migration_enabled __section(".data");
> > > 
> > > Pls add a function called something like:
> > > 
> > > bool sev_feature_enabled(enum sev_feature)
> > > 
> > > and gets SEV_FEATURE_LIVE_MIGRATION and then use it instead of adding
> > > yet another boolean which contains whether some aspect of SEV has been
> > > enabled or not.
> > > 
> > > Then add a
> > > 
> > > static enum sev_feature sev_features;
> > > 
> > > in mem_encrypt.c and that function above will query that sev_features
> > > enum for set flags.
> > 
> > > Even better: let's stop calling things SEV/SEV_ES.  Long term we want
> > anyway to use things like mem_encrypt_enabled (SEV),
> > guest_instruction_trap_enabled (SEV/ES), etc.
> > 
> > For this one we don't need a bool at all, we can simply check whether the
> > pvop points to paravirt_nop.  Also keep everything but the BSS handling in
> > arch/x86/kernel/kvm.c.  Only the BSS handling should be in
> > arch/x86/mm/mem_encrypt.c.  This way all KVM paravirt hypercalls and MSRs
> > are in kvm.c.
> > 
> > That is:
> > 
> > void kvm_init_platform(void)
> > {
> > 	if (sev_active() &&
> > 	    kvm_para_has_feature(KVM_FEATURE_SEV_LIVE_MIGRATION)) {
> > 		pv_ops.mmu.notify_page_enc_status_changed =
> > 			kvm_sev_hc_page_enc_status;
> > 		/* this takes care of bss_decrypted */
> > 		early_set_page_enc_status();
> > 		if (!efi_enabled(EFI_BOOT))
> > 			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> > 			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> > 	}
> > 	/* existing kvm_init_platform code goes here */
> > }
> > 
> > // the pvop is changed to take the pfn, so that the vaddr loop
> > // is not KVM specific
> > static inline void notify_page_enc_status_changed(unsigned long pfn,
> > 				int npages, bool enc)
> > {
> > 	PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
> > }
> > 
> > static void notify_addr_enc_status_changed(unsigned long addr,
> > 					   int numpages, bool enc)
> > {
> > #ifdef CONFIG_PARAVIRT
> > 	if (pv_ops.mmu.notify_page_enc_status_changed == paravirt_nop)
> > 		return;
> > 
> > 	/* the body of set_memory_enc_dec_hypercall goes here */
> > 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> > 		...
> > 		notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT,
> > 					       enc);
> > 		vaddr_next = (vaddr & pmask) + psize;
> > 	}
> > #endif
> > }
> > 
> > static int __set_memory_enc_dec(unsigned long addr,
> > 				int numpages, bool enc)
> > {
> > 	...
> >  	cpa_flush(&cpa, 0);
> > 	notify_addr_enc_status_changed(addr, numpages, enc);
> >  	return ret;
> > }
> > 
> > 
> > > +static int __init setup_kvm_sev_migration(void)
> > 
> > Please rename this to include efi in the function name.
> > 
> > > 
> > > +		 */
> > > +		if (!efi_enabled(EFI_BOOT))
> > > +			wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> > > +			       KVM_SEV_LIVE_MIGRATION_ENABLED);
> > > +		} else {
> > > +			pr_info("KVM enable live migration feature unsupported\n");
> > > +		}
> > > +}
> > 
> > I think this pr_info is incorrect, because it can still be enabled in the
> > late_initcall.  Just remove it as in the sketch above.
> > 
> > Paolo
> > 

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2021-04-21 19:19 UTC | newest]

Thread overview: 43+ messages
2021-04-15 15:52 [PATCH v13 00/12] Add AMD SEV guest live migration support Ashish Kalra
2021-04-15 15:53 ` [PATCH v13 01/12] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
2021-04-20  8:50   ` Paolo Bonzini
2021-04-15 15:53 ` [PATCH v13 02/12] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
2021-04-15 15:54 ` [PATCH v13 03/12] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
2021-04-15 15:54 ` [PATCH v13 04/12] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
2021-04-20  8:38   ` Paolo Bonzini
2021-04-20  9:18     ` Paolo Bonzini
2021-04-15 15:55 ` [PATCH v13 05/12] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
2021-04-20  8:40   ` Paolo Bonzini
2021-04-20  8:43     ` Paolo Bonzini
2021-04-15 15:55 ` [PATCH v13 06/12] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
2021-04-15 15:56 ` [PATCH v13 07/12] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
2021-04-15 15:57 ` [PATCH v13 08/12] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
2021-04-20 11:10   ` Paolo Bonzini
2021-04-20 17:24     ` Sean Christopherson
2021-04-15 15:57 ` [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
2021-04-20  9:39   ` Paolo Bonzini
2021-04-21 10:05   ` Borislav Petkov
2021-04-21 12:00     ` Paolo Bonzini
2021-04-21 14:09       ` Borislav Petkov
2021-04-21 12:12     ` Ashish Kalra
2021-04-21 13:50       ` Brijesh Singh
2021-04-21 13:52       ` Borislav Petkov
2021-04-15 15:58 ` [PATCH v13 10/12] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
2021-04-19 23:06   ` Sean Christopherson
2021-04-20 10:49     ` Paolo Bonzini
2021-04-20  9:47   ` Paolo Bonzini
2021-04-15 15:58 ` [PATCH v13 11/12] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
2021-04-15 16:01 ` [PATCH v13 12/12] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
2021-04-20 10:52   ` Paolo Bonzini
2021-04-21 14:44   ` Borislav Petkov
2021-04-21 15:22     ` Ashish Kalra
2021-04-21 15:32       ` Borislav Petkov
2021-04-21 15:38     ` Paolo Bonzini
2021-04-21 18:48       ` Ashish Kalra
2021-04-21 19:19         ` Ashish Kalra
2021-04-16 21:43 ` [PATCH v13 00/12] Add AMD SEV guest live migration support Steve Rutherford
2021-04-19 14:40   ` Ashish Kalra
2021-04-20 11:11 ` Paolo Bonzini
2021-04-20 18:51   ` Borislav Petkov
2021-04-20 19:08     ` Paolo Bonzini
2021-04-20 20:28       ` Borislav Petkov
