linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration.
@ 2021-08-24 11:03 Ashish Kalra
  2021-08-24 11:04 ` [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3 Ashish Kalra
                   ` (5 more replies)
  0 siblings, 6 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:03 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert

From: Ashish Kalra <ashish.kalra@amd.com>

The series adds Guest API and guest kernel support for SEV live migration.

The patch series introduces a new hypercall. The guest OS can use this
hypercall to notify the hypervisor of page encryption status changes.
If a page is encrypted with the guest-specific key, the SEV commands
are used during migration; if a page is not encrypted, migration falls
back to the default path. The new hypercall is invoked via
paravirt_ops.

This section describes how the SEV live migration feature is negotiated
between the host and the guest. The host indicates support for this
feature via KVM_FEATURE_CPUID. The guest firmware (OVMF) detects the
feature and sets a UEFI environment variable indicating OVMF support
for live migration. The guest kernel also detects host support for the
feature via CPUID; in case of an EFI boot, it verifies that OVMF also
supports the feature by reading the UEFI environment variable, and if
the variable is set, it enables live migration on the host by writing
to a custom MSR. If not booted under EFI, it simply enables the feature
by writing to the custom MSR directly.

Changes since v5:
 - Add detailed comments and an explanation of why SEV hypercalls need
   to be made before apply_alternatives(), and how the kvm_sev_hypercall3()
   function is abstracted using the early_set_memory_XX() interfaces
   invoking pv_ops (paravirt_ops).
 - Revert to the earlier kvm_sev_hypercall3() interface after
   feedback from Sean that the inversion of KVM_HYPERCALL to VMMCALL
   causes issues and is not going to work.

Changes since v4:
 - Split the guest kernel support for SEV live migration and kexec support
   for live migration into separate patches.

Changes since v3:
 - Add code style fixes per review from Boris.

Changes since v2:
 - Add guest api patch to this patchset.
 - Replace KVM_HC_PAGE_ENC_STATUS hypercall with the more generic
   KVM_HC_MAP_GPA_RANGE hypercall.
 - Add WARN_ONCE() messages if address lookup fails during kernel
   page table walk while issuing KVM_HC_MAP_GPA_RANGE hypercall.

Changes since v1:
 - Avoid having an SEV specific variant of kvm_hypercall3() and instead
   invert the default to VMMCALL.

Ashish Kalra (3):
  EFI: Introduce the new AMD Memory Encryption GUID.
  x86/kvm: Add guest support for detecting and enabling SEV Live
    Migration feature.
  x86/kvm: Add kexec support for SEV Live Migration.

Brijesh Singh (2):
  x86/kvm: Add AMD SEV specific Hypercall3
  mm: x86: Invoke hypercall when page encryption status is changed

 arch/x86/include/asm/kvm_para.h       |  12 +++
 arch/x86/include/asm/mem_encrypt.h    |   4 +
 arch/x86/include/asm/paravirt.h       |   6 ++
 arch/x86/include/asm/paravirt_types.h |   1 +
 arch/x86/include/asm/set_memory.h     |   1 +
 arch/x86/kernel/kvm.c                 | 107 ++++++++++++++++++++++++++
 arch/x86/kernel/paravirt.c            |   1 +
 arch/x86/mm/mem_encrypt.c             |  72 ++++++++++++++---
 arch/x86/mm/pat/set_memory.c          |   6 ++
 include/linux/efi.h                   |   1 +
 10 files changed, 202 insertions(+), 9 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
@ 2021-08-24 11:04 ` Ashish Kalra
  2021-09-16  1:15   ` Steve Rutherford
  2021-08-24 11:05 ` [PATCH v6 2/5] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:04 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert

From: Brijesh Singh <brijesh.singh@amd.com>

The KVM hypercall framework relies on the alternatives framework to
patch VMCALL -> VMMCALL on AMD platforms. If a hypercall is made
before apply_alternatives() is called, it defaults to VMCALL. This
works fine for a non-SEV guest: a VMCALL causes a #UD, and the
hypervisor is able to decode the instruction and do the right thing.
But when SEV is active, guest memory is encrypted with the guest key,
and the hypervisor is not able to decode the instruction bytes.

To highlight the need for this interface, consider the flow around
apply_alternatives(): setup_arch() calls init_hypervisor_platform(),
which detects the hypervisor platform the kernel is running under,
after which the hypervisor-specific initialization code can make
early hypercalls. For example, in the SEV case the KVM-specific
initialization will try to mark the "__bss_decrypted" section's
encryption state via early page encryption status hypercalls.

Now, apply_alternatives() is called much later, when setup_arch()
calls check_bugs(), so we do need some kind of early,
pre-alternatives hypercall interface. Other cases of pre-alternatives
hypercalls include marking per-CPU GHCB pages as decrypted on SEV-ES,
and the per-CPU apf_reason, steal_time and kvm_apic_eoi areas as
decrypted for SEV generally.

Add an SEV-specific hypercall3 that unconditionally uses VMMCALL. The
hypercall will be used by the SEV guest to notify the hypervisor of
page encryption status changes.

The kvm_sev_hypercall3() function is abstracted and used as follows:
all these early hypercalls are made through the early_set_memory_XX()
interfaces, which in turn invoke pv_ops (paravirt_ops).

This early_set_memory_XX() -> pv_ops.mmu.notify_page_enc_status_changed()
path is a generic interface and can easily have SEV, TDX and any other
future platform-specific abstractions added to it.

Currently, the pv_ops.mmu.notify_page_enc_status_changed() callback
is set up to invoke kvm_sev_hypercall3() in the SEV case.

Similarly, for TDX, pv_ops.mmu.notify_page_enc_status_changed() can
be set to a TDX-specific callback.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 arch/x86/include/asm/kvm_para.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 69299878b200..56935ebb1dfe 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -83,6 +83,18 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
 	return ret;
 }
 
+static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
+				      unsigned long p2, unsigned long p3)
+{
+	long ret;
+
+	asm volatile("vmmcall"
+		     : "=a"(ret)
+		     : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
+		     : "memory");
+	return ret;
+}
+
 #ifdef CONFIG_KVM_GUEST
 void kvmclock_init(void);
 void kvmclock_disable(void);
-- 
2.17.1



* [PATCH v6 2/5] mm: x86: Invoke hypercall when page encryption status is changed
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
  2021-08-24 11:04 ` [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3 Ashish Kalra
@ 2021-08-24 11:05 ` Ashish Kalra
  2021-08-24 11:06 ` [PATCH v6 3/5] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:05 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert

From: Brijesh Singh <brijesh.singh@amd.com>

Invoke a hypercall when a memory region is changed from encrypted to
decrypted and vice versa. The hypervisor needs to know the page
encryption status during guest migration.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/paravirt.h       |  6 +++
 arch/x86/include/asm/paravirt_types.h |  1 +
 arch/x86/include/asm/set_memory.h     |  1 +
 arch/x86/kernel/paravirt.c            |  1 +
 arch/x86/mm/mem_encrypt.c             | 67 +++++++++++++++++++++++----
 arch/x86/mm/pat/set_memory.c          |  6 +++
 6 files changed, 73 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index da3a1ac82be5..540bf8cb37db 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -97,6 +97,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 	PVOP_VCALL1(mmu.exit_mmap, mm);
 }
 
+static inline void notify_page_enc_status_changed(unsigned long pfn,
+						  int npages, bool enc)
+{
+	PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
+}
+
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d9d6b0203ec4..664199820239 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -168,6 +168,7 @@ struct pv_mmu_ops {
 
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
+	void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
 
 #ifdef CONFIG_PARAVIRT_XXL
 	struct paravirt_callee_save read_cr2;
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 43fa081a1adb..872617542bbc 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -83,6 +83,7 @@ int set_pages_rw(struct page *page, int numpages);
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
 bool kernel_page_present(struct page *page);
+void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
 
 extern int kernel_set_to_readonly;
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 04cafc057bed..1cc20ac9a54f 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -296,6 +296,7 @@ struct paravirt_patch_template pv_ops = {
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
 
 	.mmu.exit_mmap		= paravirt_nop,
+	.mmu.notify_page_enc_status_changed	= paravirt_nop,
 
 #ifdef CONFIG_PARAVIRT_XXL
 	.mmu.read_cr2		= __PV_IS_CALLEE_SAVE(native_read_cr2),
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..455ac487cb9d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -228,29 +228,76 @@ void __init sev_setup_arch(void)
 	swiotlb_adjust_size(size);
 }
 
-static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
+static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
 {
-	pgprot_t old_prot, new_prot;
-	unsigned long pfn, pa, size;
-	pte_t new_pte;
+	unsigned long pfn = 0;
+	pgprot_t prot;
 
 	switch (level) {
 	case PG_LEVEL_4K:
 		pfn = pte_pfn(*kpte);
-		old_prot = pte_pgprot(*kpte);
+		prot = pte_pgprot(*kpte);
 		break;
 	case PG_LEVEL_2M:
 		pfn = pmd_pfn(*(pmd_t *)kpte);
-		old_prot = pmd_pgprot(*(pmd_t *)kpte);
+		prot = pmd_pgprot(*(pmd_t *)kpte);
 		break;
 	case PG_LEVEL_1G:
 		pfn = pud_pfn(*(pud_t *)kpte);
-		old_prot = pud_pgprot(*(pud_t *)kpte);
+		prot = pud_pgprot(*(pud_t *)kpte);
 		break;
 	default:
-		return;
+		WARN_ONCE(1, "Invalid level for kpte\n");
+		return 0;
 	}
 
+	if (ret_prot)
+		*ret_prot = prot;
+
+	return pfn;
+}
+
+void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
+{
+#ifdef CONFIG_PARAVIRT
+	unsigned long sz = npages << PAGE_SHIFT;
+	unsigned long vaddr_end = vaddr + sz;
+
+	while (vaddr < vaddr_end) {
+		int psize, pmask, level;
+		unsigned long pfn;
+		pte_t *kpte;
+
+		kpte = lookup_address(vaddr, &level);
+		if (!kpte || pte_none(*kpte)) {
+			WARN_ONCE(1, "kpte lookup for vaddr\n");
+			return;
+		}
+
+		pfn = pg_level_to_pfn(level, kpte, NULL);
+		if (!pfn)
+			continue;
+
+		psize = page_level_size(level);
+		pmask = page_level_mask(level);
+
+		notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT, enc);
+
+		vaddr = (vaddr & pmask) + psize;
+	}
+#endif
+}
+
+static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
+{
+	pgprot_t old_prot, new_prot;
+	unsigned long pfn, pa, size;
+	pte_t new_pte;
+
+	pfn = pg_level_to_pfn(level, kpte, &old_prot);
+	if (!pfn)
+		return;
+
 	new_prot = old_prot;
 	if (enc)
 		pgprot_val(new_prot) |= _PAGE_ENC;
@@ -285,12 +332,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 static int __init early_set_memory_enc_dec(unsigned long vaddr,
 					   unsigned long size, bool enc)
 {
-	unsigned long vaddr_end, vaddr_next;
+	unsigned long vaddr_end, vaddr_next, start;
 	unsigned long psize, pmask;
 	int split_page_size_mask;
 	int level, ret;
 	pte_t *kpte;
 
+	start = vaddr;
 	vaddr_next = vaddr;
 	vaddr_end = vaddr + size;
 
@@ -345,6 +393,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
 	ret = 0;
 
+	notify_range_enc_status_changed(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
 out:
 	__flush_tlb_all();
 	return ret;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..4f0cd505f924 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2020,6 +2020,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, 0);
 
+	/*
+	 * Notify hypervisor that a given memory range is mapped encrypted
+	 * or decrypted.
+	 */
+	notify_range_enc_status_changed(addr, numpages, enc);
+
 	return ret;
 }
 
-- 
2.17.1



* [PATCH v6 3/5] EFI: Introduce the new AMD Memory Encryption GUID.
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
  2021-08-24 11:04 ` [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3 Ashish Kalra
  2021-08-24 11:05 ` [PATCH v6 2/5] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
@ 2021-08-24 11:06 ` Ashish Kalra
  2021-08-24 11:07 ` [PATCH v6 4/5] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:06 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert, linux-efi

From: Ashish Kalra <ashish.kalra@amd.com>

Introduce a new AMD Memory Encryption GUID which is currently used
to define a new UEFI environment variable indicating UEFI/OVMF
support for the SEV live migration feature. The variable is set up
when UEFI/OVMF detects host/hypervisor support for SEV live
migration, and is later read by the kernel using EFI runtime
services to verify that OVMF supports the live migration feature.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
---
 include/linux/efi.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/efi.h b/include/linux/efi.h
index 6b5d36babfcc..dbd39b20e034 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -362,6 +362,7 @@ void efi_native_runtime_setup(void);
 
 /* OEM GUIDs */
 #define DELLEMC_EFI_RCI2_TABLE_GUID		EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
+#define AMD_SEV_MEM_ENCRYPT_GUID		EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
 
 typedef struct {
 	efi_guid_t guid;
-- 
2.17.1



* [PATCH v6 4/5] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature.
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
                   ` (2 preceding siblings ...)
  2021-08-24 11:06 ` [PATCH v6 3/5] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
@ 2021-08-24 11:07 ` Ashish Kalra
  2021-08-24 11:07 ` [PATCH v6 5/5] x86/kvm: Add kexec support for SEV Live Migration Ashish Kalra
  2021-11-11 12:43 ` [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Paolo Bonzini
  5 siblings, 0 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:07 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert

From: Ashish Kalra <ashish.kalra@amd.com>

The guest support for detecting and enabling the SEV Live migration
feature uses the following logic :

 - kvm_init_platform() checks if the guest is booted under EFI

   - If not EFI,

     i) if kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL), issue a wrmsrl()
         to enable the SEV live migration support

   - If EFI,

     i) if kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL), read
        the UEFI variable which indicates OVMF support for live migration

     ii) if the variable indicates live migration is supported, issue a wrmsrl() to
          enable the SEV live migration support

The EFI live migration check is done using a late_initcall() callback.

Also, ensure that the _bss_decrypted section is marked as decrypted in
the hypervisor's guest page encryption status tracking.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Steve Rutherford <srutherford@google.com>
---
 arch/x86/include/asm/mem_encrypt.h |  4 ++
 arch/x86/kernel/kvm.c              | 82 ++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c          |  5 ++
 3 files changed, 91 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..8dd373cc8b66 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -43,6 +43,8 @@ void __init sme_enable(struct boot_params *bp);
 
 int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
 int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+					    bool enc);
 
 void __init mem_encrypt_free_decrypted_mem(void);
 
@@ -83,6 +85,8 @@ static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
 static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
+static inline void __init
+early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
 
 static inline void mem_encrypt_free_decrypted_mem(void) { }
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..7d36b98b567d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include <linux/nmi.h>
 #include <linux/swait.h>
 #include <linux/syscore_ops.h>
+#include <linux/efi.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -40,6 +41,7 @@
 #include <asm/ptrace.h>
 #include <asm/reboot.h>
 #include <asm/svm.h>
+#include <asm/e820/api.h>
 
 DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
 
@@ -433,6 +435,8 @@ static void kvm_guest_cpu_offline(bool shutdown)
 	kvm_disable_steal_time();
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
 		wrmsrl(MSR_KVM_PV_EOI_EN, 0);
+	if (kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL))
+		wrmsrl(MSR_KVM_MIGRATION_CONTROL, 0);
 	kvm_pv_disable_apf();
 	if (!shutdown)
 		apf_task_wake_all();
@@ -547,6 +551,55 @@ static void kvm_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
 	__send_ipi_mask(local_mask, vector);
 }
 
+static int __init setup_efi_kvm_sev_migration(void)
+{
+	efi_char16_t efi_sev_live_migration_enabled[] = L"SevLiveMigrationEnabled";
+	efi_guid_t efi_variable_guid = AMD_SEV_MEM_ENCRYPT_GUID;
+	efi_status_t status;
+	unsigned long size;
+	bool enabled;
+
+	if (!sev_active() ||
+	    !kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL))
+		return 0;
+
+	if (!efi_enabled(EFI_BOOT))
+		return 0;
+
+	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+		pr_info("%s : EFI runtime services are not enabled\n", __func__);
+		return 0;
+	}
+
+	size = sizeof(enabled);
+
+	/* Get variable contents into buffer */
+	status = efi.get_variable(efi_sev_live_migration_enabled,
+				  &efi_variable_guid, NULL, &size, &enabled);
+
+	if (status == EFI_NOT_FOUND) {
+		pr_info("%s : EFI live migration variable not found\n", __func__);
+		return 0;
+	}
+
+	if (status != EFI_SUCCESS) {
+		pr_info("%s : EFI variable retrieval failed\n", __func__);
+		return 0;
+	}
+
+	if (enabled == 0) {
+		pr_info("%s: live migration disabled in EFI\n", __func__);
+		return 0;
+	}
+
+	pr_info("%s : live migration enabled in EFI\n", __func__);
+	wrmsrl(MSR_KVM_MIGRATION_CONTROL, KVM_MIGRATION_READY);
+
+	return 1;
+}
+
+late_initcall(setup_efi_kvm_sev_migration);
+
 /*
  * Set the IPI entry points
  */
@@ -805,8 +858,37 @@ static bool __init kvm_msi_ext_dest_id(void)
 	return kvm_para_has_feature(KVM_FEATURE_MSI_EXT_DEST_ID);
 }
 
+static void kvm_sev_hc_page_enc_status(unsigned long pfn, int npages, bool enc)
+{
+	kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, pfn << PAGE_SHIFT, npages,
+			   KVM_MAP_GPA_RANGE_ENC_STAT(enc) | KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
+}
+
 static void __init kvm_init_platform(void)
 {
+	if (sev_active() &&
+	    kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL)) {
+		unsigned long nr_pages;
+
+		pv_ops.mmu.notify_page_enc_status_changed =
+			kvm_sev_hc_page_enc_status;
+
+		/*
+		 * Ensure that _bss_decrypted section is marked as decrypted in the
+		 * shared pages list.
+		 */
+		nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
+					PAGE_SIZE);
+		early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
+						nr_pages, 0);
+
+		/*
+		 * If not booted using EFI, enable Live migration support.
+		 */
+		if (!efi_enabled(EFI_BOOT))
+			wrmsrl(MSR_KVM_MIGRATION_CONTROL,
+			       KVM_MIGRATION_READY);
+	}
 	kvmclock_init();
 	x86_platform.apic_post_init = kvm_apic_init;
 }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 455ac487cb9d..2673a89d17d9 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -409,6 +409,11 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 	return early_set_memory_enc_dec(vaddr, size, true);
 }
 
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+{
+	notify_range_enc_status_changed(vaddr, npages, enc);
+}
+
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
-- 
2.17.1



* [PATCH v6 5/5] x86/kvm: Add kexec support for SEV Live Migration.
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
                   ` (3 preceding siblings ...)
  2021-08-24 11:07 ` [PATCH v6 4/5] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
@ 2021-08-24 11:07 ` Ashish Kalra
  2021-11-11 12:43 ` [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Paolo Bonzini
  5 siblings, 0 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-08-24 11:07 UTC (permalink / raw)
  To: pbonzini
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert, kexec

From: Ashish Kalra <ashish.kalra@amd.com>

Reset the host's shared pages list for kernel-specific page
encryption status settings before we load a new kernel by kexec.
We cannot reset the complete shared pages list here, as we need
to retain the UEFI/OVMF firmware specific settings.

The host's shared pages list is maintained for the guest to keep
track of all unencrypted guest memory regions; therefore we need
to explicitly mark all shared pages as encrypted again before
rebooting into the new guest kernel.

Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Steve Rutherford <srutherford@google.com>
---
 arch/x86/kernel/kvm.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7d36b98b567d..025d25efd7e6 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -869,10 +869,35 @@ static void __init kvm_init_platform(void)
 	if (sev_active() &&
 	    kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL)) {
 		unsigned long nr_pages;
+		int i;
 
 		pv_ops.mmu.notify_page_enc_status_changed =
 			kvm_sev_hc_page_enc_status;
 
+		/*
+		 * Reset the host's shared pages list related to kernel
+		 * specific page encryption status settings before we load a
+		 * new kernel by kexec. Reset the page encryption status
+		 * during early boot instead of just before kexec to avoid SMP
+		 * races during kvm_pv_guest_cpu_reboot().
+		 * NOTE: We cannot reset the complete shared pages list
+		 * here as we need to retain the UEFI/OVMF firmware
+		 * specific settings.
+		 */
+
+		for (i = 0; i < e820_table->nr_entries; i++) {
+			struct e820_entry *entry = &e820_table->entries[i];
+
+			if (entry->type != E820_TYPE_RAM)
+				continue;
+
+			nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
+
+			kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, entry->addr,
+				       nr_pages,
+				       KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
+		}
+
 		/*
 		 * Ensure that _bss_decrypted section is marked as decrypted in the
 		 * shared pages list.
-- 
2.17.1



* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-08-24 11:04 ` [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3 Ashish Kalra
@ 2021-09-16  1:15   ` Steve Rutherford
  2021-09-20 16:07     ` Sean Christopherson
  0 siblings, 1 reply; 23+ messages in thread
From: Steve Rutherford @ 2021-09-16  1:15 UTC (permalink / raw)
  To: Ashish Kalra, seanjc
  Cc: pbonzini, tglx, mingo, hpa, joro, bp, thomas.lendacky, x86, kvm,
	linux-kernel, brijesh.singh, dovmurik, tobin, jejb, dgilbert

On Tue, Aug 24, 2021 at 4:04 AM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <brijesh.singh@amd.com>
>
> KVM hypercall framework relies on alternative framework to patch the
> VMCALL -> VMMCALL on AMD platform. If a hypercall is made before
> apply_alternative() is called then it defaults to VMCALL. The approach
> works fine on non SEV guest. A VMCALL would causes #UD, and hypervisor
> will be able to decode the instruction and do the right things. But
> when SEV is active, guest memory is encrypted with guest key and
> hypervisor will not be able to decode the instruction bytes.
>
> To highlight the need to provide this interface, capturing the
> flow of apply_alternatives() :
> setup_arch() call init_hypervisor_platform() which detects
> the hypervisor platform the kernel is running under and then the
> hypervisor specific initialization code can make early hypercalls.
> For example, KVM specific initialization in case of SEV will try
> to mark the "__bss_decrypted" section's encryption state via early
> page encryption status hypercalls.
>
> Now, apply_alternatives() is called much later when setup_arch()
> calls check_bugs(), so we do need some kind of an early,
> pre-alternatives hypercall interface. Other cases of pre-alternatives
> hypercalls include marking per-cpu GHCB pages as decrypted on SEV-ES
> and per-cpu apf_reason, steal_time and kvm_apic_eoi as decrypted for
> SEV generally.
>
> Add SEV specific hypercall3, it unconditionally uses VMMCALL. The hypercall
> will be used by the SEV guest to notify encrypted pages to the hypervisor.
>
> This kvm_sev_hypercall3() function is abstracted and used as follows :
> All these early hypercalls are made through early_set_memory_XX() interfaces,
> which in turn invoke pv_ops (paravirt_ops).
>
> This early_set_memory_XX() -> pv_ops.mmu.notify_page_enc_status_changed()
> is a generic interface and can easily have SEV, TDX and any other
> future platform specific abstractions added to it.
>
> Currently, pv_ops.mmu.notify_page_enc_status_changed() callback is setup to
> invoke kvm_sev_hypercall3() in case of SEV.
>
> Similarly, in case of TDX, pv_ops.mmu.notify_page_enc_status_changed()
> can be setup to a TDX specific callback.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford <srutherford@google.com>
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  arch/x86/include/asm/kvm_para.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
> index 69299878b200..56935ebb1dfe 100644
> --- a/arch/x86/include/asm/kvm_para.h
> +++ b/arch/x86/include/asm/kvm_para.h
> @@ -83,6 +83,18 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
>         return ret;
>  }
>
> +static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
> +                                     unsigned long p2, unsigned long p3)
> +{
> +       long ret;
> +
> +       asm volatile("vmmcall"
> +                    : "=a"(ret)
> +                    : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
> +                    : "memory");
> +       return ret;
> +}
> +
>  #ifdef CONFIG_KVM_GUEST
>  void kvmclock_init(void);
>  void kvmclock_disable(void);
> --
> 2.17.1
>

Looking at these threads, this patch either:
1) Needs review/approval from a maintainer that is interested or
2) Should flip back to using alternative (as suggested by Sean). In
particular: `ALTERNATIVE("vmmcall", "vmcall",
ALT_NOT(X86_FEATURE_VMMCALL))`. My understanding is that the advantage
of this is that (after calling apply alternatives) you get exactly the
same behavior as before. But before apply alternatives, you get the
desired flipped behavior. The previous patch changed the behavior
after apply alternatives in a very slight manner (if feature flags
were not set, you'd get a different instruction).

I personally don't have strong feelings on this decision, but this
decision does need to be made for this patch series to move forward.

I'd also be curious to hear Sean's opinion on this since he was vocal
about this previously.

Thanks,
Steve

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-16  1:15   ` Steve Rutherford
@ 2021-09-20 16:07     ` Sean Christopherson
  2021-09-21  9:58       ` Ashish Kalra
  0 siblings, 1 reply; 23+ messages in thread
From: Sean Christopherson @ 2021-09-20 16:07 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Ashish Kalra, pbonzini, tglx, mingo, hpa, joro, bp,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

On Wed, Sep 15, 2021, Steve Rutherford wrote:
> Looking at these threads, this patch either:
> 1) Needs review/approval from a maintainer that is interested or
> 2) Should flip back to using alternative (as suggested by Sean). In
> particular: `ALTERNATIVE("vmmcall", "vmcall",
> ALT_NOT(X86_FEATURE_VMMCALL))`. My understanding is that the advantage
> of this is that (after calling apply alternatives) you get exactly the
> same behavior as before. But before apply alternatives, you get the
> desired flipped behavior. The previous patch changed the behavior
> after apply alternatives in a very slight manner (if feature flags
> were not set, you'd get a different instruction).
> 
> I personally don't have strong feelings on this decision, but this
> decision does need to be made for this patch series to move forward.
> 
> I'd also be curious to hear Sean's opinion on this since he was vocal
> about this previously.

Pulling in Ashish's last email from the previous thread, which I failed to respond
to.

https://lore.kernel.org/all/20210820133223.GA28059@ashkalra_ubuntu_server/T/#u

On Fri, Aug 20, 2021, Ashish Kalra wrote:
> On Thu, Aug 19, 2021 at 11:15:26PM +0000, Sean Christopherson wrote:
> > On Thu, Aug 19, 2021, Kalra, Ashish wrote:
> > >
> > > > On Aug 20, 2021, at 3:38 AM, Kalra, Ashish <Ashish.Kalra@amd.com> wrote:
> > > > I think it makes more sense to stick to the original approach/patch, i.e.,
> > > > introducing a new private hypercall interface like kvm_sev_hypercall3() and
> > > > let early paravirtualized kernel code invoke this private hypercall
> > > > interface wherever required.
> >
> > I don't like the idea of duplicating code just because the problem is tricky to
> > solve.  Right now it's just one function, but it could balloon to multiple in
> > the future.  Plus there's always the possibility of a new, pre-alternatives
> > kvm_hypercall() being added in generic code, at which point using an SEV-specific
> > variant gets even uglier.

...

> Now, apply_alternatives() is called much later when setup_arch() calls
> check_bugs(), so we do need some kind of an early, pre-alternatives
> hypercall interface.
>
> Other cases of pre-alternatives hypercalls include marking per-cpu GHCB
> pages as decrypted on SEV-ES and per-cpu apf_reason, steal_time and
> kvm_apic_eoi as decrypted for SEV generally.
>
> Actually using this kvm_sev_hypercall3() function may be abstracted
> quite nicely. All these early hypercalls are made through
> early_set_memory_XX() interfaces, which in turn invoke pv_ops.
>
> Now, pv_ops can have this SEV/TDX specific abstractions.
>
> Currently, pv_ops.mmu.notify_page_enc_status_changed() callback is setup
> to kvm_sev_hypercall3() in case of SEV.
>
> Similarly, in case of TDX, pv_ops.mmu.notify_page_enc_status_changed() can
> be setup to a TDX specific callback.
>
> Therefore, this early_set_memory_XX() -> pv_ops.mmu.notify_page_enc_status_changed()
> is a generic interface and can easily have SEV, TDX and any other future platform
> specific abstractions added to it.

Unless there's some fundamental technical hurdle I'm overlooking, if pv_ops can
be configured early enough to handle this, then so can alternatives.  Adding
notify_page_enc_status_changed() may be necessary in the future, e.g. for TDX
or SNP, but IMO that is orthogonal to adding a generic, 100% redundant helper.

I appreciate that simply swapping the default from VMCALL->VMMCALL is a bit dirty
since it gives special meaning to the default value, but if that's the argument
against reusing kvm_hypercall3() then we should solve the early alternatives
problem, not fudge around it.


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-20 16:07     ` Sean Christopherson
@ 2021-09-21  9:58       ` Ashish Kalra
  2021-09-21 13:50         ` Sean Christopherson
  0 siblings, 1 reply; 23+ messages in thread
From: Ashish Kalra @ 2021-09-21  9:58 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Steve Rutherford, pbonzini, tglx, mingo, hpa, joro, bp,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

Hello Sean, Steve,

On Mon, Sep 20, 2021 at 04:07:04PM +0000, Sean Christopherson wrote:
> On Wed, Sep 15, 2021, Steve Rutherford wrote:
> > Looking at these threads, this patch either:
> > 1) Needs review/approval from a maintainer that is interested or
> > 2) Should flip back to using alternative (as suggested by Sean). In
> > particular: `ALTERNATIVE("vmmcall", "vmcall",
> > ALT_NOT(X86_FEATURE_VMMCALL))`. My understanding is that the advantage
> > of this is that (after calling apply alternatives) you get exactly the
> > same behavior as before. But before apply alternatives, you get the
> > desired flipped behavior. The previous patch changed the behavior
> > after apply alternatives in a very slight manner (if feature flags
> > were not set, you'd get a different instruction).
> > 

This is simply a hack, I don't think this is a good approach to take forward.

> > I personally don't have strong feelings on this decision, but this
> > decision does need to be made for this patch series to move forward.
> > 
> > I'd also be curious to hear Sean's opinion on this since he was vocal
> > about this previously.
> 
> Pulling in Ashish's last email from the previous thread, which I failed to respond
> to.
> 
> https://lore.kernel.org/all/20210820133223.GA28059@ashkalra_ubuntu_server/T/#u
> 
> On Fri, Aug 20, 2021, Ashish Kalra wrote:
> > On Thu, Aug 19, 2021 at 11:15:26PM +0000, Sean Christopherson wrote:
> > > On Thu, Aug 19, 2021, Kalra, Ashish wrote:
> > > >
> > > > > On Aug 20, 2021, at 3:38 AM, Kalra, Ashish <Ashish.Kalra@amd.com> wrote:
> > > > > I think it makes more sense to stick to the original approach/patch, i.e.,
> > > > > introducing a new private hypercall interface like kvm_sev_hypercall3() and
> > > > > let early paravirtualized kernel code invoke this private hypercall
> > > > > interface wherever required.
> > >
> > > I don't like the idea of duplicating code just because the problem is tricky to
> > > solve.  Right now it's just one function, but it could balloon to multiple in
> > > the future.  Plus there's always the possibility of a new, pre-alternatives
> > > kvm_hypercall() being added in generic code, at which point using an SEV-specific
> > > variant gets even uglier.
> 
> ...
> 
> > Now, apply_alternatives() is called much later when setup_arch() calls
> > check_bugs(), so we do need some kind of an early, pre-alternatives
> > hypercall interface.
> >
> > Other cases of pre-alternatives hypercalls include marking per-cpu GHCB
> > pages as decrypted on SEV-ES and per-cpu apf_reason, steal_time and
> > kvm_apic_eoi as decrypted for SEV generally.
> >
> > Actually using this kvm_sev_hypercall3() function may be abstracted
> > quite nicely. All these early hypercalls are made through
> > early_set_memory_XX() interfaces, which in turn invoke pv_ops.
> >
> > Now, pv_ops can have this SEV/TDX specific abstractions.
> >
> > Currently, pv_ops.mmu.notify_page_enc_status_changed() callback is setup
> > to kvm_sev_hypercall3() in case of SEV.
> >
> > Similarly, in case of TDX, pv_ops.mmu.notify_page_enc_status_changed() can
> > be setup to a TDX specific callback.
> >
> > Therefore, this early_set_memory_XX() -> pv_ops.mmu.notify_page_enc_status_changed()
> > is a generic interface and can easily have SEV, TDX and any other future platform
> > specific abstractions added to it.
> 
> Unless there's some fundamental technical hurdle I'm overlooking, if pv_ops can
> be configured early enough to handle this, then so can alternatives.  
> 

Now, as I mentioned earlier, apply_alternatives() is only called after
boot CPU identification has been done, which involves a lot of support
code that may depend on earlier setup_arch() code, and CPU mitigation
selections are then made before patching alternatives, which again may
have dependencies on previous code paths in setup_arch(), so I am not
sure whether we can call apply_alternatives() earlier.

Maybe for a guest kernel and virtualized boot environment, CPU
identification may not be as complicated as for a physical host, but
it may still have dependencies on earlier architecture-specific boot
code.

> Adding notify_page_enc_status_changed() may be necessary in the future, e.g. for TDX
> or SNP, but IMO that is orthogonal to adding a generic, 100% redundant helper.

If we have to do this in the future anyway, and as Sean mentioned
earlier vmcall needs to be fixed for TDX (as it will cause a #VE),
then why not add this abstraction right now?

Thanks,
Ashish

> I appreciate that simply swapping the default from VMCALL->VMMCALL is a bit dirty
> since it gives special meaning to the default value, but if that's the argument
> against reusing kvm_hypercall3() then we should solve the early alternatives
> problem, not fudge around it.


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-21  9:58       ` Ashish Kalra
@ 2021-09-21 13:50         ` Sean Christopherson
  2021-09-21 14:51           ` Borislav Petkov
  0 siblings, 1 reply; 23+ messages in thread
From: Sean Christopherson @ 2021-09-21 13:50 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Steve Rutherford, pbonzini, tglx, mingo, hpa, joro, bp,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

On Tue, Sep 21, 2021, Ashish Kalra wrote:
> This is simply a hack, I don't think this is a good approach to take forward.

But a clever hack ;-)

> > Unless there's some fundamental technical hurdle I'm overlooking, if pv_ops can
> > be configured early enough to handle this, then so can alternatives.  
> 
> Now, as I mentioned earlier, apply_alternatives() is only called after
> boot CPU identification has been done, which involves a lot of support
> code that may depend on earlier setup_arch() code, and CPU mitigation
> selections are then made before patching alternatives, which again may
> have dependencies on previous code paths in setup_arch(), so I am not
> sure whether we can call apply_alternatives() earlier.

apply_alternatives() is a generic helper that can work on any struct alt_instr
array, e.g. KVM_HYPERCALL can put its alternative into a different section that's
patched as soon as the VMM is identified.

> Maybe for a guest kernel and virtualized boot environment, CPU
> identification may not be as complicated as for a physical host, but
> it may still have dependencies on earlier architecture-specific boot
> code.
> 
> > Adding notify_page_enc_status_changed() may be necessary in the future, e.g. for TDX
> > or SNP, but IMO that is orthogonal to adding a generic, 100% redundant helper.
> 
> If we have to do this in the future anyway, and as Sean mentioned
> earlier vmcall needs to be fixed for TDX (as it will cause a #VE),
> then why not add this abstraction right now?

I'm not objecting to adding a PV op, I'm objecting to kvm_sev_hypercall3().  If
others disagree and feel it's the way forward, I certainly won't stand in the way,
but IMO it's unnecessary code duplication.


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-21 13:50         ` Sean Christopherson
@ 2021-09-21 14:51           ` Borislav Petkov
  2021-09-21 16:07             ` Sean Christopherson
  0 siblings, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2021-09-21 14:51 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Ashish Kalra, Steve Rutherford, pbonzini, tglx, mingo, hpa, joro,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

On Tue, Sep 21, 2021 at 01:50:09PM +0000, Sean Christopherson wrote:
> apply_alternatives() is a generic helper that can work on any struct alt_instr
> array, e.g. KVM_HYPERCALL can put its alternative into a different section that's
> patched as soon as the VMM is identified.

Where exactly in the boot process you wanna move it?

As Ashish says, you need the boot_cpu_data bits properly set before it
runs.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-21 14:51           ` Borislav Petkov
@ 2021-09-21 16:07             ` Sean Christopherson
  2021-09-22  9:38               ` Borislav Petkov
  0 siblings, 1 reply; 23+ messages in thread
From: Sean Christopherson @ 2021-09-21 16:07 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Ashish Kalra, Steve Rutherford, pbonzini, tglx, mingo, hpa, joro,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

On Tue, Sep 21, 2021, Borislav Petkov wrote:
> On Tue, Sep 21, 2021 at 01:50:09PM +0000, Sean Christopherson wrote:
> > apply_alternatives() is a generic helper that can work on any struct alt_instr
> > array, e.g. KVM_HYPERCALL can put its alternative into a different section that's
> > patched as soon as the VMM is identified.
> 
> Where exactly in the boot process you wanna move it?

init_hypervisor_platform(), after x86_init.hyper.init_platform() so that the
PV support can set the desired feature flags.  Since kvm_hypercall*() is only
used by KVM guests, set_cpu_cap(c, X86_FEATURE_VMMCALL) can be moved out of
early_init_amd/hygon() and into kvm_init_platform().

> As Ashish says, you need the boot_cpu_data bits properly set before it
> runs.

Another option would be to refactor apply_alternatives() to allow the caller to
provide a different feature check mechanism than boot_cpu_has(), which I think
would let us drop X86_FEATURE_VMMCALL, X86_FEATURE_VMCALL, and X86_FEATURE_VMW_VMMCALL
from cpufeatures.  That might get more than a bit gross though.

But like I said, if others think I'm over-engineering this...


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-21 16:07             ` Sean Christopherson
@ 2021-09-22  9:38               ` Borislav Petkov
  2021-09-22 12:10                 ` Ashish Kalra
  0 siblings, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2021-09-22  9:38 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Ashish Kalra, Steve Rutherford, pbonzini, tglx, mingo, hpa, joro,
	thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh, dovmurik,
	tobin, jejb, dgilbert

On Tue, Sep 21, 2021 at 04:07:03PM +0000, Sean Christopherson wrote:
> init_hypervisor_platform(), after x86_init.hyper.init_platform() so that the
> PV support can set the desired feature flags.  Since kvm_hypercall*() is only
> used by KVM guests, set_cpu_cap(c, X86_FEATURE_VMMCALL) can be moved out of
> early_init_amd/hygon() and into kvm_init_platform().

See below.

> Another option would be to refactor apply_alternatives() to allow
> the caller to provide a different feature check mechanism than
> boot_cpu_has(), which I think would let us drop X86_FEATURE_VMMCALL,
> X86_FEATURE_VMCALL, and X86_FEATURE_VMW_VMMCALL from cpufeatures. That
> might get more than a bit gross though.

Uuuf.

So here's what I'm seeing (line numbers given to show when stuff
happens):

start_kernel
|-> 953: setup_arch
    |-> 794: early_cpu_init
    |-> 936: init_hypervisor_platform
|
|-> 1134: check_bugs
	  |-> alternative_instructions

at line 794 setup_arch() calls early_cpu_init() which would end up
setting X86_FEATURE_VMMCALL on an AMD guest, based on CPUID information.

init_hypervisor_platform() happens after that.

The alternatives patching happens in check_bugs() at line 1134. Which
means, if one would consider moving the patching up, one would have
to audit all the code between line 953 and 1134, whether it does
set_cpu_cap() or some of the other helpers to set or clear bits in
boot_cpu_data which controls the patching.

So for that I have only one thing to say: can'o'worms. We tried to move
the memblock allocations placement in the boot process and generated at
least 4 regressions. I'm still testing the fix for the fix for the 4th
regression.

So moving stuff in the fragile boot process makes my hair stand up.

Refactoring apply_alternatives() to patch only for X86_FEATURE_VMMCALL
and then patch again, I dunno, this stuff is fragile and it might cause
some other similarly nasty fallout. And those are hard to debug because
one does not see immediately when boot_cpu_data features are missing and
functionality is behaving differently because of that.

So what's wrong with:

kvm_hypercall3:

	if (cpu_feature_enabled(X86_FEATURE_VMMCALL))
		return kvm_sev_hypercall3(...);

	/* rest of code */

?

Dunno we probably had that already in those old versions and maybe that
was shot down for another reason but it should get you what you want
without having to test the world and more for regressions possibly
happening from disturbing the house of cards called x86 boot order.

IMHO, I'd say.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-22  9:38               ` Borislav Petkov
@ 2021-09-22 12:10                 ` Ashish Kalra
  2021-09-22 13:54                   ` Borislav Petkov
  0 siblings, 1 reply; 23+ messages in thread
From: Ashish Kalra @ 2021-09-22 12:10 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Sean Christopherson, Steve Rutherford, pbonzini, tglx, mingo,
	hpa, joro, thomas.lendacky, x86, kvm, linux-kernel,
	brijesh.singh, dovmurik, tobin, jejb, dgilbert

Hello Boris,

On Wed, Sep 22, 2021 at 11:38:08AM +0200, Borislav Petkov wrote:
> On Tue, Sep 21, 2021 at 04:07:03PM +0000, Sean Christopherson wrote:
> > init_hypervisor_platform(), after x86_init.hyper.init_platform() so that the
> > PV support can set the desired feature flags.  Since kvm_hypercall*() is only
> > used by KVM guests, set_cpu_cap(c, X86_FEATURE_VMMCALL) can be moved out of
> > early_init_amd/hygon() and into kvm_init_platform().
> 
> See below.
> 
> > Another option would be to refactor apply_alternatives() to allow
> > the caller to provide a different feature check mechanism than
> > boot_cpu_has(), which I think would let us drop X86_FEATURE_VMMCALL,
> > X86_FEATURE_VMCALL, and X86_FEATURE_VMW_VMMCALL from cpufeatures. That
> > might get more than a bit gross though.
> 
> Uuuf.
> 
> So here's what I'm seeing (line numbers given to show when stuff
> happens):
> 
> start_kernel
> |-> 953: setup_arch
>     |-> 794: early_cpu_init
>     |-> 936: init_hypervisor_platform
> |
> |-> 1134: check_bugs
> 	  |-> alternative_instructions
> 
> at line 794 setup_arch() calls early_cpu_init() which would end up
> setting X86_FEATURE_VMMCALL on an AMD guest, based on CPUID information.
> 
> init_hypervisor_platform() happens after that.
> 
> The alternatives patching happens in check_bugs() at line 1134. Which
> means, if one would consider moving the patching up, one would have
> to audit all the code between line 953 and 1134, whether it does
> set_cpu_cap() or some of the other helpers to set or clear bits in
> boot_cpu_data which controls the patching.
> 
> So for that I have only one thing to say: can'o'worms. We tried to move
> the memblock allocations placement in the boot process and generated at
> least 4 regressions. I'm still testing the fix for the fix for the 4th
> regression.
> 
> So moving stuff in the fragile boot process makes my hair stand up.
> 
> Refactoring apply_alternatives() to patch only for X86_FEATURE_VMMCALL
> and then patch again, I dunno, this stuff is fragile and it might cause
> some other similarly nasty fallout. And those are hard to debug because
> one does not see immediately when boot_cpu_data features are missing and
> functionality is behaving differently because of that.
> 
> So what's wrong with:
> 
> kvm_hypercall3:
> 
> 	if (cpu_feature_enabled(X86_FEATURE_VMMCALL))
> 		return kvm_sev_hypercall3(...);
> 
> 	/* rest of code */
> 
> ?
> 
> Dunno we probably had that already in those old versions and maybe that
> was shot down for another reason but it should get you what you want
> without having to test the world and more for regressions possibly
> happening from disturbing the house of cards called x86 boot order.
> 
> IMHO, I'd say.
> 

Thanks for the above explanation.

If we have to do this:
 	if (cpu_feature_enabled(X86_FEATURE_VMMCALL))
 		return kvm_sev_hypercall3(...);

Then isn't it cleaner to simply do it via the paravirt_ops interface,
i.e., pv_ops.mmu.notify_page_enc_status_changed(), where the callback
is only set when SEV and the live migration feature are supported, and
is invoked through early_set_memory_decrypted()/encrypted()?

Another memory encryption platform can set its callback accordingly.

Thanks,
Ashish

> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-22 12:10                 ` Ashish Kalra
@ 2021-09-22 13:54                   ` Borislav Petkov
  2021-09-28 19:05                     ` Steve Rutherford
  0 siblings, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2021-09-22 13:54 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: Sean Christopherson, Steve Rutherford, pbonzini, tglx, mingo,
	hpa, joro, thomas.lendacky, x86, kvm, linux-kernel,
	brijesh.singh, dovmurik, tobin, jejb, dgilbert

On Wed, Sep 22, 2021 at 12:10:08PM +0000, Ashish Kalra wrote:
> Then isn't it cleaner to simply do it via the paravirt_ops interface,
> i.e, pv_ops.mmu.notify_page_enc_status_changed() where the callback
> is only set when SEV and live migration feature are supported and
> invoked through early_set_memory_decrypted()/encrypted().
> 
> Another memory encryption platform can set its callback accordingly.

Yeah, that sounds even cleaner to me.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-22 13:54                   ` Borislav Petkov
@ 2021-09-28 19:05                     ` Steve Rutherford
  2021-09-28 19:26                       ` Kalra, Ashish
  0 siblings, 1 reply; 23+ messages in thread
From: Steve Rutherford @ 2021-09-28 19:05 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Ashish Kalra, Sean Christopherson, pbonzini, tglx, mingo, hpa,
	joro, thomas.lendacky, x86, kvm, linux-kernel, brijesh.singh,
	dovmurik, tobin, jejb, dgilbert

On Wed, Sep 22, 2021 at 6:54 AM Borislav Petkov <bp@alien8.de> wrote:
>
> On Wed, Sep 22, 2021 at 12:10:08PM +0000, Ashish Kalra wrote:
> > Then isn't it cleaner to simply do it via the paravirt_ops interface,
> > i.e, pv_ops.mmu.notify_page_enc_status_changed() where the callback
> > is only set when SEV and live migration feature are supported and
> > invoked through early_set_memory_decrypted()/encrypted().
> >
> > Another memory encryption platform can set its callback accordingly.
>
> Yeah, that sounds even cleaner to me.
If I'm not mistaken, this is what the patch set does now?


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-28 19:05                     ` Steve Rutherford
@ 2021-09-28 19:26                       ` Kalra, Ashish
  2021-09-29 11:44                         ` Borislav Petkov
  0 siblings, 1 reply; 23+ messages in thread
From: Kalra, Ashish @ 2021-09-28 19:26 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Borislav Petkov, Sean Christopherson, pbonzini, tglx, mingo, hpa,
	joro, Lendacky, Thomas, x86, kvm, linux-kernel, Singh, Brijesh,
	dovmurik, tobin, jejb, dgilbert



> On Sep 28, 2021, at 2:05 PM, Steve Rutherford <srutherford@google.com> wrote:
> 
> On Wed, Sep 22, 2021 at 6:54 AM Borislav Petkov <bp@alien8.de> wrote:
>> 
>>> On Wed, Sep 22, 2021 at 12:10:08PM +0000, Ashish Kalra wrote:
>>> Then isn't it cleaner to simply do it via the paravirt_ops interface,
>>> i.e, pv_ops.mmu.notify_page_enc_status_changed() where the callback
>>> is only set when SEV and live migration feature are supported and
>>> invoked through early_set_memory_decrypted()/encrypted().
>>> 
>>> Another memory encryption platform can set its callback accordingly.
>> 
>> Yeah, that sounds even cleaner to me.
> If I'm not mistaken, this is what the patch set does now?

Yes that’s what I mentioned to Boris.

Thanks,
Ashish


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-28 19:26                       ` Kalra, Ashish
@ 2021-09-29 11:44                         ` Borislav Petkov
  2021-10-26 20:48                           ` Ashish Kalra
  2021-11-10 19:38                           ` Steve Rutherford
  0 siblings, 2 replies; 23+ messages in thread
From: Borislav Petkov @ 2021-09-29 11:44 UTC (permalink / raw)
  To: Kalra, Ashish
  Cc: Steve Rutherford, Sean Christopherson, pbonzini, tglx, mingo,
	hpa, joro, Lendacky, Thomas, x86, kvm, linux-kernel, Singh,
	Brijesh, dovmurik, tobin, jejb, dgilbert

On Tue, Sep 28, 2021 at 07:26:32PM +0000, Kalra, Ashish wrote:
> Yes that’s what I mentioned to Boris.

Right, and as far as I'm concerned, the x86 bits look ok to me and I'm
fine with this going through the kvm tree.

There will be a conflict with this:

https://lkml.kernel.org/r/20210928191009.32551-1-bp@alien8.de

resulting in:

arch/x86/kernel/kvm.c: In function ‘setup_efi_kvm_sev_migration’:
arch/x86/kernel/kvm.c:563:7: error: implicit declaration of function ‘sev_active’; did you mean ‘cpu_active’? [-Werror=implicit-function-declaration]
  563 |  if (!sev_active() ||
      |       ^~~~~~~~~~
      |       cpu_active
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:277: arch/x86/kernel/kvm.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/Makefile.build:540: arch/x86/kernel] Error 2
make: *** [Makefile:1868: arch/x86] Error 2
make: *** Waiting for unfinished jobs....

but Paolo and I will figure out what to do - I'll likely have a separate
branch out which he can merge and that sev_active() will need to be
converted to

	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))

which is trivial.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-29 11:44                         ` Borislav Petkov
@ 2021-10-26 20:48                           ` Ashish Kalra
  2021-11-10 19:38                           ` Steve Rutherford
  1 sibling, 0 replies; 23+ messages in thread
From: Ashish Kalra @ 2021-10-26 20:48 UTC (permalink / raw)
  To: Borislav Petkov, Kalra, Ashish
  Cc: Steve Rutherford, Sean Christopherson, pbonzini, tglx, mingo,
	hpa, joro, Lendacky, Thomas, x86, kvm, linux-kernel, Singh,
	Brijesh, dovmurik, tobin, jejb, dgilbert

Hello Paolo,

With reference to Boris's ack below, are you going to go ahead and queue
this patch-set to the kvm tree?

Or do you want me to work on improving or fixing anything in this
patch-set?

Thanks,

Ashish

On 9/29/21 11:44 AM, Borislav Petkov wrote:
> On Tue, Sep 28, 2021 at 07:26:32PM +0000, Kalra, Ashish wrote:
>> Yes that’s what I mentioned to Boris.
> Right, and as far as I'm concerned, the x86 bits look ok to me and I'm
> fine with this going through the kvm tree.
>
> There will be a conflict with this:
>
> https://lkml.kernel.org/r/20210928191009.32551-1-bp@alien8.de
>
> resulting in:
>
> arch/x86/kernel/kvm.c: In function ‘setup_efi_kvm_sev_migration’:
> arch/x86/kernel/kvm.c:563:7: error: implicit declaration of function ‘sev_active’; did you mean ‘cpu_active’? [-Werror=implicit-function-declaration]
>    563 |  if (!sev_active() ||
>        |       ^~~~~~~~~~
>        |       cpu_active
> cc1: some warnings being treated as errors
> make[2]: *** [scripts/Makefile.build:277: arch/x86/kernel/kvm.o] Error 1
> make[2]: *** Waiting for unfinished jobs....
> make[1]: *** [scripts/Makefile.build:540: arch/x86/kernel] Error 2
> make: *** [Makefile:1868: arch/x86] Error 2
> make: *** Waiting for unfinished jobs....
>
> but Paolo and I will figure out what to do - I'll likely have a separate
> branch out which he can merge and that sev_active() will need to be
> converted to
>
> 	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
>
> which is trivial.
>
> Thx.
>


* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-09-29 11:44                         ` Borislav Petkov
  2021-10-26 20:48                           ` Ashish Kalra
@ 2021-11-10 19:38                           ` Steve Rutherford
  2021-11-10 22:11                             ` Paolo Bonzini
  1 sibling, 1 reply; 23+ messages in thread
From: Steve Rutherford @ 2021-11-10 19:38 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Kalra, Ashish, Sean Christopherson, pbonzini, tglx, mingo, hpa,
	joro, Lendacky, Thomas, x86, kvm, linux-kernel, Singh, Brijesh,
	dovmurik, tobin, jejb, dgilbert

On Wed, Sep 29, 2021 at 4:44 AM Borislav Petkov <bp@alien8.de> wrote:
>
> On Tue, Sep 28, 2021 at 07:26:32PM +0000, Kalra, Ashish wrote:
> > Yes that’s what I mentioned to Boris.
>
> Right, and as far as I'm concerned, the x86 bits look ok to me and I'm
> fine with this going through the kvm tree.
>
> There will be a conflict with this:
>
> https://lkml.kernel.org/r/20210928191009.32551-1-bp@alien8.de
>
> resulting in:
>
> arch/x86/kernel/kvm.c: In function ‘setup_efi_kvm_sev_migration’:
> arch/x86/kernel/kvm.c:563:7: error: implicit declaration of function ‘sev_active’; did you mean ‘cpu_active’? [-Werror=implicit-function-declaration]
>   563 |  if (!sev_active() ||
>       |       ^~~~~~~~~~
>       |       cpu_active
> cc1: some warnings being treated as errors
> make[2]: *** [scripts/Makefile.build:277: arch/x86/kernel/kvm.o] Error 1
> make[2]: *** Waiting for unfinished jobs....
> make[1]: *** [scripts/Makefile.build:540: arch/x86/kernel] Error 2
> make: *** [Makefile:1868: arch/x86] Error 2
> make: *** Waiting for unfinished jobs....
>
> but Paolo and I will figure out what to do - I'll likely have a separate
> branch out which he can merge and that sev_active() will need to be
> converted to
>
>         if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
>
> which is trivial.
>
> Thx.
>
> --
> Regards/Gruss,
>     Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette

Hey All,

Bumping this thread again, since I believe these patches are good to go.

Let me know if there is anything I can do to help here,
Thanks,
Steve

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-11-10 19:38                           ` Steve Rutherford
@ 2021-11-10 22:11                             ` Paolo Bonzini
  2021-11-10 22:42                               ` Borislav Petkov
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2021-11-10 22:11 UTC (permalink / raw)
  To: Steve Rutherford, Borislav Petkov
  Cc: Kalra, Ashish, Sean Christopherson, tglx, mingo, hpa, joro,
	Lendacky, Thomas, x86, kvm, linux-kernel, Singh, Brijesh,
	dovmurik, tobin, jejb, dgilbert

On 11/10/21 20:38, Steve Rutherford wrote:
> arch/x86/kernel/kvm.c: In function ‘setup_efi_kvm_sev_migration’:
> arch/x86/kernel/kvm.c:563:7: error: implicit declaration of function ‘sev_active’; did you mean ‘cpu_active’? [-Werror=implicit-function-declaration]
>    563 |  if (!sev_active() ||
>        |       ^~~~~~~~~~
>        |       cpu_active
> cc1: some warnings being treated as errors
> make[2]: *** [scripts/Makefile.build:277: arch/x86/kernel/kvm.o] Error 1
> make[2]: *** Waiting for unfinished jobs....
> make[1]: *** [scripts/Makefile.build:540: arch/x86/kernel] Error 2
> make: *** [Makefile:1868: arch/x86] Error 2
> make: *** Waiting for unfinished jobs....
> 
> but Paolo and I will figure out what to do - I'll likely have a separate
> branch out which he can merge and that sev_active() will need to be
> converted to
> 
>          if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))

Hi Boris,

can I just merge these patches via the KVM tree, since we're close to 
the end of the merge window and the cc_platform_has series has been merged?

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3
  2021-11-10 22:11                             ` Paolo Bonzini
@ 2021-11-10 22:42                               ` Borislav Petkov
  0 siblings, 0 replies; 23+ messages in thread
From: Borislav Petkov @ 2021-11-10 22:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Steve Rutherford, Kalra, Ashish, Sean Christopherson, tglx,
	mingo, hpa, joro, Lendacky, Thomas, x86, kvm, linux-kernel,
	Singh, Brijesh, dovmurik, tobin, jejb, dgilbert

On Wed, Nov 10, 2021 at 11:11:41PM +0100, Paolo Bonzini wrote:
> can I just merge these patches via the KVM tree, since we're close to the
> end of the merge window and the cc_platform_has series has been merged?

Yes, please. All the pending tip stuff from the last round is already
upstream.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration.
  2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
                   ` (4 preceding siblings ...)
  2021-08-24 11:07 ` [PATCH v6 5/5] x86/kvm: Add kexec support for SEV Live Migration Ashish Kalra
@ 2021-11-11 12:43 ` Paolo Bonzini
  5 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2021-11-11 12:43 UTC (permalink / raw)
  To: Ashish Kalra
  Cc: seanjc, tglx, mingo, hpa, joro, bp, Thomas.Lendacky, x86, kvm,
	linux-kernel, srutherford, brijesh.singh, dovmurik, tobin, jejb,
	dgilbert

On 8/24/21 13:03, Ashish Kalra wrote:
> adds guest API and guest kernel support for SEV live migration.
> 
> The patch series introduces a new hypercall. The guest OS can use this
> hypercall to notify the host of each page's encryption status. If a
> page is encrypted with the guest-specific key, the SEV command is used
> during migration; if a page is not encrypted, migration falls back to
> the default path. The new hypercall is invoked via paravirt_ops.
> 
> This section describes how the SEV live migration feature is negotiated
> between the host and guest. The host indicates support for this feature
> via KVM_FEATURE_CPUID. The guest firmware (OVMF) detects the feature
> and sets a UEFI environment variable indicating OVMF support for live
> migration. The guest kernel also detects host support for the feature
> via CPUID; in the case of an EFI boot, it verifies that OVMF supports
> the feature by reading the UEFI environment variable and, if it is set,
> enables live migration on the host by writing to a custom MSR. If not
> booted under EFI, it simply enables the feature by writing to the
> custom MSR directly.

Queued, thanks.

Paolo


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2021-11-11 12:43 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-24 11:03 [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Ashish Kalra
2021-08-24 11:04 ` [PATCH v6 1/5] x86/kvm: Add AMD SEV specific Hypercall3 Ashish Kalra
2021-09-16  1:15   ` Steve Rutherford
2021-09-20 16:07     ` Sean Christopherson
2021-09-21  9:58       ` Ashish Kalra
2021-09-21 13:50         ` Sean Christopherson
2021-09-21 14:51           ` Borislav Petkov
2021-09-21 16:07             ` Sean Christopherson
2021-09-22  9:38               ` Borislav Petkov
2021-09-22 12:10                 ` Ashish Kalra
2021-09-22 13:54                   ` Borislav Petkov
2021-09-28 19:05                     ` Steve Rutherford
2021-09-28 19:26                       ` Kalra, Ashish
2021-09-29 11:44                         ` Borislav Petkov
2021-10-26 20:48                           ` Ashish Kalra
2021-11-10 19:38                           ` Steve Rutherford
2021-11-10 22:11                             ` Paolo Bonzini
2021-11-10 22:42                               ` Borislav Petkov
2021-08-24 11:05 ` [PATCH v6 2/5] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
2021-08-24 11:06 ` [PATCH v6 3/5] EFI: Introduce the new AMD Memory Encryption GUID Ashish Kalra
2021-08-24 11:07 ` [PATCH v6 4/5] x86/kvm: Add guest support for detecting and enabling SEV Live Migration feature Ashish Kalra
2021-08-24 11:07 ` [PATCH v6 5/5] x86/kvm: Add kexec support for SEV Live Migration Ashish Kalra
2021-11-11 12:43 ` [PATCH v6 0/5] Add Guest API & Guest Kernel support for SEV live migration Paolo Bonzini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).