* [PATCH v5 0/5] x86: Fix SEV guest regression
@ 2018-09-06 11:42 Brijesh Singh
  2018-09-06 11:42 ` [PATCH v5 1/5] x86/mm: Restructure sme_encrypt_kernel() Brijesh Singh
                   ` (4 more replies)
  0 siblings, 5 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:42 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	Paolo Bonzini, Sean Christopherson, Radim Krčmář

The following commit:

  368a540e0232 ("x86/kvmclock: Remove memblock dependency")
  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=368a540e0232ad446931f5a4e8a5e06f69f21343

introduced an SEV guest regression.

The guest physical addresses holding the wall_clock and hv_clock_boot
variables are shared with the hypervisor and must be mapped with C=0
when SEV is active. To clear the C-bit we use
kernel_physical_mapping_init() to split the large pages, but the above
commit moved the kvmclock initialization very early in boot, where
kernel_physical_mapping_init() fails to allocate memory while splitting
the large page.

To solve this, add a special .data..decrypted section that can be used
to hold such shared variables. Early boot code maps this section with
C=0. The section is PMD-aligned and sized to avoid the need to split
the pages. Callers can use the __decrypted attribute to place variables
in the .data..decrypted section.
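
For illustration, declaring a shared variable with the new attribute is
expected to look like this (matching the kvmclock change in patch 4 of
this series):

  static struct pvclock_wall_clock wall_clock __decrypted;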

Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>

Changes since v4:
 - define a few static pages in .data..decrypted which can be used
   for CPUs > HVC_BOOT_ARRAY_SIZE when SEV is active.

Changes since v3:
 - commit message improvements (based on Sean's feedback)

Changes since v2:
 - commit message and code comment improvements (based on Boris' feedback)
 - move the sme_populate_pgd() fixes into a new patch.
 - drop the stable Cc - will submit to stable after the patches are upstreamed.

Changes since v1:
 - move the logic to re-arrange the mappings into a new patch
 - move the definition of __start_data_* to mem_encrypt.h
 - map the workarea buffer as encrypted when SEV is enabled
 - enhance sme_populate_pgd() to update the PTE/PMD flags when a mapping
   already exists

Brijesh Singh (5):
  x86/mm: Restructure sme_encrypt_kernel()
  x86/mm: fix sme_populate_pgd() to update page flags
  x86/mm: add .data..decrypted section to hold shared variables
  x86/kvm: use __decrypted attribute in shared variables
  x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active

 arch/x86/include/asm/mem_encrypt.h |  10 ++
 arch/x86/kernel/head64.c           |  11 ++
 arch/x86/kernel/kvmclock.c         |  26 ++++-
 arch/x86/kernel/vmlinux.lds.S      |  20 ++++
 arch/x86/mm/init.c                 |   3 +
 arch/x86/mm/mem_encrypt.c          |  10 ++
 arch/x86/mm/mem_encrypt_identity.c | 232 +++++++++++++++++++++++++++----------
 7 files changed, 245 insertions(+), 67 deletions(-)

-- 
2.7.4



* [PATCH v5 1/5] x86/mm: Restructure sme_encrypt_kernel()
  2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
@ 2018-09-06 11:42 ` Brijesh Singh
  2018-09-06 11:42 ` [PATCH v5 2/5] x86/mm: fix sme_populate_pgd() to update page flags Brijesh Singh
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:42 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

Re-arrange sme_encrypt_kernel() by moving the workarea map/unmap
logic into separate static functions. There are no functional changes
in this patch. The restructuring will allow us to expand
sme_encrypt_kernel() in the future.
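
With the restructuring in place, sme_encrypt_kernel() reduces to the
following sequence (a sketch of the resulting flow, see the diff below):

	sme_encrypt_kernel(bp)
		build_workarea_map(bp, &wa, &ppd);
		sme_encrypt_execute(...);	/* encrypt the kernel */
		sme_encrypt_execute(...);	/* encrypt the initrd, if present */
		teardown_workarea_map(&wa, &ppd);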

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
---
 arch/x86/mm/mem_encrypt_identity.c | 160 ++++++++++++++++++++++++-------------
 1 file changed, 104 insertions(+), 56 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 7ae3686..92265d3 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -72,6 +72,22 @@ struct sme_populate_pgd_data {
 	unsigned long vaddr_end;
 };
 
+struct sme_workarea_data {
+	unsigned long kernel_start;
+	unsigned long kernel_end;
+	unsigned long kernel_len;
+
+	unsigned long initrd_start;
+	unsigned long initrd_end;
+	unsigned long initrd_len;
+
+	unsigned long workarea_start;
+	unsigned long workarea_end;
+	unsigned long workarea_len;
+
+	unsigned long decrypted_base;
+};
+
 static char sme_cmdline_arg[] __initdata = "mem_encrypt";
 static char sme_cmdline_on[]  __initdata = "on";
 static char sme_cmdline_off[] __initdata = "off";
@@ -266,19 +282,17 @@ static unsigned long __init sme_pgtable_calc(unsigned long len)
 	return entries + tables;
 }
 
-void __init sme_encrypt_kernel(struct boot_params *bp)
+static void __init build_workarea_map(struct boot_params *bp,
+				      struct sme_workarea_data *wa,
+				      struct sme_populate_pgd_data *ppd)
 {
 	unsigned long workarea_start, workarea_end, workarea_len;
 	unsigned long execute_start, execute_end, execute_len;
 	unsigned long kernel_start, kernel_end, kernel_len;
 	unsigned long initrd_start, initrd_end, initrd_len;
-	struct sme_populate_pgd_data ppd;
 	unsigned long pgtable_area_len;
 	unsigned long decrypted_base;
 
-	if (!sme_active())
-		return;
-
 	/*
 	 * Prepare for encrypting the kernel and initrd by building new
 	 * pagetables with the necessary attributes needed to encrypt the
@@ -358,17 +372,17 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * pagetables and when the new encrypted and decrypted kernel
 	 * mappings are populated.
 	 */
-	ppd.pgtable_area = (void *)execute_end;
+	ppd->pgtable_area = (void *)execute_end;
 
 	/*
 	 * Make sure the current pagetable structure has entries for
 	 * addressing the workarea.
 	 */
-	ppd.pgd = (pgd_t *)native_read_cr3_pa();
-	ppd.paddr = workarea_start;
-	ppd.vaddr = workarea_start;
-	ppd.vaddr_end = workarea_end;
-	sme_map_range_decrypted(&ppd);
+	ppd->pgd = (pgd_t *)native_read_cr3_pa();
+	ppd->paddr = workarea_start;
+	ppd->vaddr = workarea_start;
+	ppd->vaddr_end = workarea_end;
+	sme_map_range_decrypted(ppd);
 
 	/* Flush the TLB - no globals so cr3 is enough */
 	native_write_cr3(__native_read_cr3());
@@ -379,9 +393,9 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * then be populated with new PUDs and PMDs as the encrypted and
 	 * decrypted kernel mappings are created.
 	 */
-	ppd.pgd = ppd.pgtable_area;
-	memset(ppd.pgd, 0, sizeof(pgd_t) * PTRS_PER_PGD);
-	ppd.pgtable_area += sizeof(pgd_t) * PTRS_PER_PGD;
+	ppd->pgd = ppd->pgtable_area;
+	memset(ppd->pgd, 0, sizeof(pgd_t) * PTRS_PER_PGD);
+	ppd->pgtable_area += sizeof(pgd_t) * PTRS_PER_PGD;
 
 	/*
 	 * A different PGD index/entry must be used to get different
@@ -399,75 +413,109 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	decrypted_base <<= PGDIR_SHIFT;
 
 	/* Add encrypted kernel (identity) mappings */
-	ppd.paddr = kernel_start;
-	ppd.vaddr = kernel_start;
-	ppd.vaddr_end = kernel_end;
-	sme_map_range_encrypted(&ppd);
+	ppd->paddr = kernel_start;
+	ppd->vaddr = kernel_start;
+	ppd->vaddr_end = kernel_end;
+	sme_map_range_encrypted(ppd);
 
 	/* Add decrypted, write-protected kernel (non-identity) mappings */
-	ppd.paddr = kernel_start;
-	ppd.vaddr = kernel_start + decrypted_base;
-	ppd.vaddr_end = kernel_end + decrypted_base;
-	sme_map_range_decrypted_wp(&ppd);
+	ppd->paddr = kernel_start;
+	ppd->vaddr = kernel_start + decrypted_base;
+	ppd->vaddr_end = kernel_end + decrypted_base;
+	sme_map_range_decrypted_wp(ppd);
 
 	if (initrd_len) {
 		/* Add encrypted initrd (identity) mappings */
-		ppd.paddr = initrd_start;
-		ppd.vaddr = initrd_start;
-		ppd.vaddr_end = initrd_end;
-		sme_map_range_encrypted(&ppd);
+		ppd->paddr = initrd_start;
+		ppd->vaddr = initrd_start;
+		ppd->vaddr_end = initrd_end;
+		sme_map_range_encrypted(ppd);
 		/*
 		 * Add decrypted, write-protected initrd (non-identity) mappings
 		 */
-		ppd.paddr = initrd_start;
-		ppd.vaddr = initrd_start + decrypted_base;
-		ppd.vaddr_end = initrd_end + decrypted_base;
-		sme_map_range_decrypted_wp(&ppd);
+		ppd->paddr = initrd_start;
+		ppd->vaddr = initrd_start + decrypted_base;
+		ppd->vaddr_end = initrd_end + decrypted_base;
+		sme_map_range_decrypted_wp(ppd);
 	}
 
 	/* Add decrypted workarea mappings to both kernel mappings */
-	ppd.paddr = workarea_start;
-	ppd.vaddr = workarea_start;
-	ppd.vaddr_end = workarea_end;
-	sme_map_range_decrypted(&ppd);
+	ppd->paddr = workarea_start;
+	ppd->vaddr = workarea_start;
+	ppd->vaddr_end = workarea_end;
+	sme_map_range_decrypted(ppd);
 
-	ppd.paddr = workarea_start;
-	ppd.vaddr = workarea_start + decrypted_base;
-	ppd.vaddr_end = workarea_end + decrypted_base;
-	sme_map_range_decrypted(&ppd);
+	ppd->paddr = workarea_start;
+	ppd->vaddr = workarea_start + decrypted_base;
+	ppd->vaddr_end = workarea_end + decrypted_base;
+	sme_map_range_decrypted(ppd);
 
-	/* Perform the encryption */
-	sme_encrypt_execute(kernel_start, kernel_start + decrypted_base,
-			    kernel_len, workarea_start, (unsigned long)ppd.pgd);
+	wa->kernel_start = kernel_start;
+	wa->kernel_end = kernel_end;
+	wa->kernel_len = kernel_len;
 
-	if (initrd_len)
-		sme_encrypt_execute(initrd_start, initrd_start + decrypted_base,
-				    initrd_len, workarea_start,
-				    (unsigned long)ppd.pgd);
+	wa->initrd_start = initrd_start;
+	wa->initrd_end = initrd_end;
+	wa->initrd_len = initrd_len;
+
+	wa->workarea_start = workarea_start;
+	wa->workarea_end = workarea_end;
+	wa->workarea_len = workarea_len;
+
+	wa->decrypted_base = decrypted_base;
+}
 
+static void __init teardown_workarea_map(struct sme_workarea_data *wa,
+				         struct sme_populate_pgd_data *ppd)
+{
 	/*
 	 * At this point we are running encrypted.  Remove the mappings for
 	 * the decrypted areas - all that is needed for this is to remove
 	 * the PGD entry/entries.
 	 */
-	ppd.vaddr = kernel_start + decrypted_base;
-	ppd.vaddr_end = kernel_end + decrypted_base;
-	sme_clear_pgd(&ppd);
-
-	if (initrd_len) {
-		ppd.vaddr = initrd_start + decrypted_base;
-		ppd.vaddr_end = initrd_end + decrypted_base;
-		sme_clear_pgd(&ppd);
+	ppd->vaddr = wa->kernel_start + wa->decrypted_base;
+	ppd->vaddr_end = wa->kernel_end + wa->decrypted_base;
+	sme_clear_pgd(ppd);
+
+	if (wa->initrd_len) {
+		ppd->vaddr = wa->initrd_start + wa->decrypted_base;
+		ppd->vaddr_end = wa->initrd_end + wa->decrypted_base;
+		sme_clear_pgd(ppd);
 	}
 
-	ppd.vaddr = workarea_start + decrypted_base;
-	ppd.vaddr_end = workarea_end + decrypted_base;
-	sme_clear_pgd(&ppd);
+	ppd->vaddr = wa->workarea_start + wa->decrypted_base;
+	ppd->vaddr_end = wa->workarea_end + wa->decrypted_base;
+	sme_clear_pgd(ppd);
 
 	/* Flush the TLB - no globals so cr3 is enough */
 	native_write_cr3(__native_read_cr3());
 }
 
+void __init sme_encrypt_kernel(struct boot_params *bp)
+{
+	struct sme_populate_pgd_data ppd;
+	struct sme_workarea_data wa;
+
+	if (!sme_active())
+		return;
+
+	build_workarea_map(bp, &wa, &ppd);
+
+	/* When SEV is active, encrypt kernel and initrd */
+	sme_encrypt_execute(wa.kernel_start,
+			    wa.kernel_start + wa.decrypted_base,
+			    wa.kernel_len, wa.workarea_start,
+			    (unsigned long)ppd.pgd);
+
+	if (wa.initrd_len)
+		sme_encrypt_execute(wa.initrd_start,
+				    wa.initrd_start + wa.decrypted_base,
+				    wa.initrd_len, wa.workarea_start,
+				    (unsigned long)ppd.pgd);
+
+	teardown_workarea_map(&wa, &ppd);
+}
+
 void __init sme_enable(struct boot_params *bp)
 {
 	const char *cmdline_ptr, *cmdline_arg, *cmdline_on, *cmdline_off;
-- 
2.7.4



* [PATCH v5 2/5] x86/mm: fix sme_populate_pgd() to update page flags
  2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
  2018-09-06 11:42 ` [PATCH v5 1/5] x86/mm: Restructure sme_encrypt_kernel() Brijesh Singh
@ 2018-09-06 11:42 ` Brijesh Singh
  2018-09-06 11:43 ` [PATCH v5 3/5] x86/mm: add .data..decrypted section to hold shared variables Brijesh Singh
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:42 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

Fix sme_populate_pgd() to update page flags if the PMD/PTE entry
already exists.
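
In other words, the guarded update, which left the flags of an existing
mapping untouched:

	if (pte_none(*pte))
		set_pte(pte, __pte(ppd->paddr | ppd->pte_flags));

becomes an unconditional:

	set_pte(pte, __pte(ppd->paddr | ppd->pte_flags));

so that re-mapping an existing range picks up the new flags (the PMD
path drops its pmd_large() early-return for the same reason).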

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
---
 arch/x86/mm/mem_encrypt_identity.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 92265d3..7659e65 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -154,9 +154,6 @@ static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd)
 		return;
 
 	pmd = pmd_offset(pud, ppd->vaddr);
-	if (pmd_large(*pmd))
-		return;
-
 	set_pmd(pmd, __pmd(ppd->paddr | ppd->pmd_flags));
 }
 
@@ -182,8 +179,7 @@ static void __init sme_populate_pgd(struct sme_populate_pgd_data *ppd)
 		return;
 
 	pte = pte_offset_map(pmd, ppd->vaddr);
-	if (pte_none(*pte))
-		set_pte(pte, __pte(ppd->paddr | ppd->pte_flags));
+	set_pte(pte, __pte(ppd->paddr | ppd->pte_flags));
 }
 
 static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd)
-- 
2.7.4



* [PATCH v5 3/5] x86/mm: add .data..decrypted section to hold shared variables
  2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
  2018-09-06 11:42 ` [PATCH v5 1/5] x86/mm: Restructure sme_encrypt_kernel() Brijesh Singh
  2018-09-06 11:42 ` [PATCH v5 2/5] x86/mm: fix sme_populate_pgd() to update page flags Brijesh Singh
@ 2018-09-06 11:43 ` Brijesh Singh
  2018-09-06 11:43 ` [PATCH v5 4/5] x86/kvm: use __decrypted attribute in " Brijesh Singh
  2018-09-06 11:43 ` [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
  4 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:43 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

kvmclock defines a few static variables which are shared with the
hypervisor during the kvmclock initialization.

When SEV is active, memory is encrypted with a guest-specific key, and
if the guest OS wants to share a memory region with the hypervisor then
it must clear the C-bit before sharing it. Currently, we use
kernel_physical_mapping_init() to split large pages before clearing the
C-bit on shared pages. But it fails when called from the kvmclock
initialization (mainly because the memblock allocator is not ready that
early during boot).

Add a __decrypted attribute which can be used when defining such shared
variables. The so-defined variables will be placed in the
.data..decrypted section. This section is mapped with C=0 early during
boot; we also ensure that the initialized values are updated to match
the C=0 mapping (i.e. an in-place decryption is performed). The
.data..decrypted section is PMD-aligned and sized so that we avoid the
need to split large pages when mapping the section.

sme_encrypt_kernel() was previously used to perform in-place encryption
of the Linux kernel and initrd when SME is active. The routine has been
enhanced to also decrypt the .data..decrypted section, for both the SME
and SEV cases.
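
With this change, the flow of sme_encrypt_kernel() becomes (a sketch of
the resulting sequence, see the diff below):

	sme_encrypt_kernel(bp)
		if (!mem_encrypt_active())
			return;
		build_workarea_map(bp, &wa, &ppd);
		if (sme_active())
			sme_encrypt_execute(...);	/* encrypt kernel/initrd */
		decrypt_shared_data(&wa, &ppd);		/* decrypt .data..decrypted */
		teardown_workarea_map(&wa, &ppd);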

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
---
 arch/x86/include/asm/mem_encrypt.h |  6 +++
 arch/x86/kernel/head64.c           | 11 +++++
 arch/x86/kernel/vmlinux.lds.S      | 17 +++++++
 arch/x86/mm/mem_encrypt_identity.c | 94 ++++++++++++++++++++++++++++++++------
 4 files changed, 113 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index c064383..802b2eb 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -52,6 +52,8 @@ void __init mem_encrypt_init(void);
 bool sme_active(void);
 bool sev_active(void);
 
+#define __decrypted __attribute__((__section__(".data..decrypted")))
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0ULL
@@ -77,6 +79,8 @@ early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0;
 static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
 
+#define __decrypted
+
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
 /*
@@ -88,6 +92,8 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
 #define __sme_pa(x)		(__pa(x) | sme_me_mask)
 #define __sme_pa_nodebug(x)	(__pa_nodebug(x) | sme_me_mask)
 
+extern char __start_data_decrypted[], __end_data_decrypted[];
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 8047379..af39d68 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -112,6 +112,7 @@ static bool __head check_la57_support(unsigned long physaddr)
 unsigned long __head __startup_64(unsigned long physaddr,
 				  struct boot_params *bp)
 {
+	unsigned long vaddr, vaddr_end;
 	unsigned long load_delta, *p;
 	unsigned long pgtable_flags;
 	pgdval_t *pgd;
@@ -234,6 +235,16 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	/* Encrypt the kernel and related (if SME is active) */
 	sme_encrypt_kernel(bp);
 
+	/* Clear the memory encryption mask from the .data..decrypted section. */
+	if (mem_encrypt_active()) {
+		vaddr = (unsigned long)__start_data_decrypted;
+		vaddr_end = (unsigned long)__end_data_decrypted;
+		for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+			i = pmd_index(vaddr);
+			pmd[i] -= sme_get_me_mask();
+		}
+	}
+
 	/*
 	 * Return the SME encryption mask (if SME is active) to be used as a
 	 * modifier for the initial pgdir entry programmed into CR3.
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 8bde0a4..78d3169 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -89,6 +89,21 @@ PHDRS {
 	note PT_NOTE FLAGS(0);          /* ___ */
 }
 
+/*
+ * This section contains data which will be mapped as decrypted. Memory
+ * encryption operates on a page basis. Make this section PMD-aligned
+ * to avoid splitting the pages while mapping the section early.
+ *
+ * Note: We use a separate section so that only this section gets
+ * decrypted to avoid exposing more than we wish.
+ */
+#define DATA_DECRYPTED						\
+	. = ALIGN(PMD_SIZE);					\
+	__start_data_decrypted = .;				\
+	*(.data..decrypted);					\
+	. = ALIGN(PMD_SIZE);					\
+	__end_data_decrypted = .;				\
+
 SECTIONS
 {
 #ifdef CONFIG_X86_32
@@ -171,6 +186,8 @@ SECTIONS
 		/* rarely changed data like cpu maps */
 		READ_MOSTLY_DATA(INTERNODE_CACHE_BYTES)
 
+		DATA_DECRYPTED
+
 		/* End of data section */
 		_edata = .;
 	} :data
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 7659e65..08e70ba 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -51,6 +51,8 @@
 				 (_PAGE_PAT | _PAGE_PWT))
 
 #define PMD_FLAGS_ENC		(PMD_FLAGS_LARGE | _PAGE_ENC)
+#define PMD_FLAGS_ENC_WP	((PMD_FLAGS_ENC & ~_PAGE_CACHE_MASK) | \
+				 (_PAGE_PAT | _PAGE_PWT))
 
 #define PTE_FLAGS		(__PAGE_KERNEL_EXEC & ~_PAGE_GLOBAL)
 
@@ -59,6 +61,8 @@
 				 (_PAGE_PAT | _PAGE_PWT))
 
 #define PTE_FLAGS_ENC		(PTE_FLAGS | _PAGE_ENC)
+#define PTE_FLAGS_ENC_WP	((PTE_FLAGS_ENC & ~_PAGE_CACHE_MASK) | \
+				 (_PAGE_PAT | _PAGE_PWT))
 
 struct sme_populate_pgd_data {
 	void    *pgtable_area;
@@ -231,6 +235,11 @@ static void __init sme_map_range_encrypted(struct sme_populate_pgd_data *ppd)
 	__sme_map_range(ppd, PMD_FLAGS_ENC, PTE_FLAGS_ENC);
 }
 
+static void __init sme_map_range_encrypted_wp(struct sme_populate_pgd_data *ppd)
+{
+	__sme_map_range(ppd, PMD_FLAGS_ENC_WP, PTE_FLAGS_ENC_WP);
+}
+
 static void __init sme_map_range_decrypted(struct sme_populate_pgd_data *ppd)
 {
 	__sme_map_range(ppd, PMD_FLAGS_DEC, PTE_FLAGS_DEC);
@@ -378,7 +387,10 @@ static void __init build_workarea_map(struct boot_params *bp,
 	ppd->paddr = workarea_start;
 	ppd->vaddr = workarea_start;
 	ppd->vaddr_end = workarea_end;
-	sme_map_range_decrypted(ppd);
+	if (sev_active())
+		sme_map_range_encrypted(ppd);
+	else
+		sme_map_range_decrypted(ppd);
 
 	/* Flush the TLB - no globals so cr3 is enough */
 	native_write_cr3(__native_read_cr3());
@@ -435,16 +447,27 @@ static void __init build_workarea_map(struct boot_params *bp,
 		sme_map_range_decrypted_wp(ppd);
 	}
 
-	/* Add decrypted workarea mappings to both kernel mappings */
+	/*
+	 * When SEV is active, the kernel is already encrypted, hence
+	 * map the workarea as encrypted. When SME is active, the kernel
+	 * is not yet encrypted, hence add decrypted workarea mappings
+	 * to both kernel mappings.
+	 */
 	ppd->paddr = workarea_start;
 	ppd->vaddr = workarea_start;
 	ppd->vaddr_end = workarea_end;
-	sme_map_range_decrypted(ppd);
+	if (sev_active())
+		sme_map_range_encrypted(ppd);
+	else
+		sme_map_range_decrypted(ppd);
 
 	ppd->paddr = workarea_start;
 	ppd->vaddr = workarea_start + decrypted_base;
 	ppd->vaddr_end = workarea_end + decrypted_base;
-	sme_map_range_decrypted(ppd);
+	if (sev_active())
+		sme_map_range_encrypted(ppd);
+	else
+		sme_map_range_decrypted(ppd);
 
 	wa->kernel_start = kernel_start;
 	wa->kernel_end = kernel_end;
@@ -487,28 +510,69 @@ static void __init teardown_workarea_map(struct sme_workarea_data *wa,
 	native_write_cr3(__native_read_cr3());
 }
 
+static void __init decrypt_shared_data(struct sme_workarea_data *wa,
+				       struct sme_populate_pgd_data *ppd)
+{
+	unsigned long decrypted_start, decrypted_end, decrypted_len;
+
+	/* Physical addresses of decrypted data section */
+	decrypted_start = __pa_symbol(__start_data_decrypted);
+	decrypted_end = ALIGN(__pa_symbol(__end_data_decrypted), PMD_PAGE_SIZE);
+	decrypted_len = decrypted_end - decrypted_start;
+
+	if (!decrypted_len)
+		return;
+
+	/* Add decrypted mapping for the section (identity) */
+	ppd->paddr = decrypted_start;
+	ppd->vaddr = decrypted_start;
+	ppd->vaddr_end = decrypted_end;
+	sme_map_range_decrypted(ppd);
+
+	/* Add encrypted-wp mapping for the section (non-identity) */
+	ppd->paddr = decrypted_start;
+	ppd->vaddr = decrypted_start + wa->decrypted_base;
+	ppd->vaddr_end = decrypted_end + wa->decrypted_base;
+	sme_map_range_encrypted_wp(ppd);
+
+	/* Perform in-place decryption */
+	sme_encrypt_execute(decrypted_start,
+			    decrypted_start + wa->decrypted_base,
+			    decrypted_len, wa->workarea_start,
+			    (unsigned long)ppd->pgd);
+
+	ppd->vaddr = decrypted_start + wa->decrypted_base;
+	ppd->vaddr_end = decrypted_end + wa->decrypted_base;
+	sme_clear_pgd(ppd);
+}
+
 void __init sme_encrypt_kernel(struct boot_params *bp)
 {
 	struct sme_populate_pgd_data ppd;
 	struct sme_workarea_data wa;
 
-	if (!sme_active())
+	if (!mem_encrypt_active())
 		return;
 
 	build_workarea_map(bp, &wa, &ppd);
 
-	/* When SEV is active, encrypt kernel and initrd */
-	sme_encrypt_execute(wa.kernel_start,
-			    wa.kernel_start + wa.decrypted_base,
-			    wa.kernel_len, wa.workarea_start,
-			    (unsigned long)ppd.pgd);
-
-	if (wa.initrd_len)
-		sme_encrypt_execute(wa.initrd_start,
-				    wa.initrd_start + wa.decrypted_base,
-				    wa.initrd_len, wa.workarea_start,
+	/* When SME is active, encrypt kernel and initrd */
+	if (sme_active()) {
+		sme_encrypt_execute(wa.kernel_start,
+				    wa.kernel_start + wa.decrypted_base,
+				    wa.kernel_len, wa.workarea_start,
 				    (unsigned long)ppd.pgd);
 
+		if (wa.initrd_len)
+			sme_encrypt_execute(wa.initrd_start,
+					    wa.initrd_start + wa.decrypted_base,
+					    wa.initrd_len, wa.workarea_start,
+					    (unsigned long)ppd.pgd);
+	}
+
+	/* Decrypt the contents of .data..decrypted section */
+	decrypt_shared_data(&wa, &ppd);
+
 	teardown_workarea_map(&wa, &ppd);
 }
 
-- 
2.7.4



* [PATCH v5 4/5] x86/kvm: use __decrypted attribute in shared variables
  2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
                   ` (2 preceding siblings ...)
  2018-09-06 11:43 ` [PATCH v5 3/5] x86/mm: add .data..decrypted section to hold shared variables Brijesh Singh
@ 2018-09-06 11:43 ` Brijesh Singh
  2018-09-06 11:43 ` [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
  4 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:43 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

Commit 368a540e0232 ("x86/kvmclock: Remove memblock dependency")
caused an SEV guest regression. When SEV is active, we map the shared
variables (wall_clock and hv_clock_boot) with C=0 to ensure that both
the guest and the hypervisor are able to access the data. To map the
variables we use kernel_physical_mapping_init() to split the large pages,
but splitting large pages requires allocating a new PMD, which fails now
that kvmclock initialization is called early during boot.

Recently we added a special .data..decrypted section to hold the shared
variables. This section is mapped with C=0 early during boot. Use the
__decrypted attribute to place wall_clock and hv_clock_boot in the
.data..decrypted section so that they are mapped with C=0.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Fixes: 368a540e0232 ("x86/kvmclock: Remove memblock dependency")
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
---
 arch/x86/kernel/kvmclock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 1e67646..376fd3a 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -61,8 +61,8 @@ early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
 	(PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info))
 
 static struct pvclock_vsyscall_time_info
-			hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __aligned(PAGE_SIZE);
-static struct pvclock_wall_clock wall_clock;
+			hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
+static struct pvclock_wall_clock wall_clock __decrypted;
 static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
 
 static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
-- 
2.7.4



* [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
                   ` (3 preceding siblings ...)
  2018-09-06 11:43 ` [PATCH v5 4/5] x86/kvm: use __decrypted attribute in " Brijesh Singh
@ 2018-09-06 11:43 ` Brijesh Singh
  2018-09-06 12:24   ` Borislav Petkov
  2018-09-06 14:07   ` Sean Christopherson
  4 siblings, 2 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 11:43 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: Brijesh Singh, Tom Lendacky, Thomas Gleixner, Borislav Petkov,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

Currently, the per-cpu pvclock data is allocated dynamically when
cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
shared between the guest and the hypervisor, hence it must be mapped as
unencrypted (i.e. C=0) when SEV is active.

When SEV is active, we would be wasting a fairly sizeable amount of
memory, since each CPU would do a separate 4k allocation just so that
it can clear the C-bit. Let's define an extra static array of pvclock
data, sized at a few pages. In the preparatory stage of CPU hotplug,
use an element of this static array to avoid the dynamic allocation.
The array is placed in the .data..decrypted section so that it is
mapped with C=0 during boot.

In the non-SEV case, these static pages are unused and will be freed by
free_decrypted_mem().
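
To put numbers on it (assuming sizeof(struct pvclock_vsyscall_time_info)
is 64 bytes, so that HVC_BOOT_ARRAY_SIZE = 4096 / 64 = 64): the seven
extra pages provide (7 * 4096) / 64 = 448 entries, which together with
the 64 entries of hv_clock_boot[] covers the 512 VCPUs mentioned in the
code comment below.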

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
---
 arch/x86/include/asm/mem_encrypt.h |  4 ++++
 arch/x86/kernel/kvmclock.c         | 22 +++++++++++++++++++---
 arch/x86/kernel/vmlinux.lds.S      |  3 +++
 arch/x86/mm/init.c                 |  3 +++
 arch/x86/mm/mem_encrypt.c          | 10 ++++++++++
 5 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 802b2eb..aa204af 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
 
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
+void __init free_decrypted_mem(void);
 
 bool sme_active(void);
 bool sev_active(void);
 
 #define __decrypted __attribute__((__section__(".data..decrypted")))
+#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
@@ -80,6 +82,7 @@ static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
 
 #define __decrypted
+#define __decrypted_hvclock
 
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
@@ -93,6 +96,7 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
 #define __sme_pa_nodebug(x)	(__pa_nodebug(x) | sme_me_mask)
 
 extern char __start_data_decrypted[], __end_data_decrypted[];
+extern char __start_data_decrypted_hvclock[];
 
 #endif	/* __ASSEMBLY__ */
 
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 376fd3a..5b88773 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -65,6 +65,13 @@ static struct pvclock_vsyscall_time_info
 static struct pvclock_wall_clock wall_clock __decrypted;
 static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
 
+
+/* This should cover up to 512 VCPUs (the first 64 are covered by hv_clock_boot[]). */
+#define HVC_DECRYPTED_ARRAY_SIZE \
+	((PAGE_SIZE * 7)  / sizeof(struct pvclock_vsyscall_time_info))
+static struct pvclock_vsyscall_time_info
+			hv_clock_dec[HVC_DECRYPTED_ARRAY_SIZE] __decrypted_hvclock;
+
 static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
 {
 	return &this_cpu_read(hv_clock_per_cpu)->pvti;
@@ -267,10 +274,19 @@ static int kvmclock_setup_percpu(unsigned int cpu)
 		return 0;
 
 	/* Use the static page for the first CPUs, allocate otherwise */
-	if (cpu < HVC_BOOT_ARRAY_SIZE)
+	if (cpu < HVC_BOOT_ARRAY_SIZE) {
 		p = &hv_clock_boot[cpu];
-	else
-		p = kzalloc(sizeof(*p), GFP_KERNEL);
+	} else {
+		/*
+		 * When SEV is active, use the static pages from
+		 * .data..decrypted_hvclock section. The pages are already
+		 * mapped with C=0.
+		 */
+		if (sev_active())
+			p = &hv_clock_dec[cpu - HVC_BOOT_ARRAY_SIZE];
+		else
+			p = kzalloc(sizeof(*p), GFP_KERNEL);
+	}
 
 	per_cpu(hv_clock_per_cpu, cpu) = p;
 	return p ? 0 : -ENOMEM;
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 78d3169..1aec291 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -101,6 +101,9 @@ PHDRS {
 	. = ALIGN(PMD_SIZE);					\
 	__start_data_decrypted = .;				\
 	*(.data..decrypted);					\
+	. = ALIGN(PAGE_SIZE);					\
+	__start_data_decrypted_hvclock = .;			\
+	*(.data..decrypted_hvclock);				\
 	. = ALIGN(PMD_SIZE);					\
 	__end_data_decrypted = .;				\
 
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7a8fc26..052b279 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -815,9 +815,12 @@ void free_kernel_image_pages(void *begin, void *end)
 		set_memory_np_noalias(begin_ul, len_pages);
 }
 
+void __weak free_decrypted_mem(void) { }
+
 void __ref free_initmem(void)
 {
 	e820__reallocate_tables();
+	free_decrypted_mem();
 
 	free_kernel_image_pages(&__init_begin, &__init_end);
 }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b2de398..865b1ad 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -348,6 +348,16 @@ bool sev_active(void)
 EXPORT_SYMBOL(sev_active);
 
 /* Architecture __weak replacement functions */
+void __init free_decrypted_mem(void)
+{
+	if (mem_encrypt_active())
+		return;
+
+	free_init_pages("unused decrypted",
+			(unsigned long)__start_data_decrypted_hvclock,
+			(unsigned long)__end_data_decrypted);
+}
+
 void __init mem_encrypt_init(void)
 {
 	if (!sme_me_mask)
-- 
2.7.4



* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 11:43 ` [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
@ 2018-09-06 12:24   ` Borislav Petkov
  2018-09-06 13:50     ` Sean Christopherson
  2018-09-06 14:07   ` Sean Christopherson
  1 sibling, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 12:24 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, Tom Lendacky, Thomas Gleixner,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Radim Krčmář

On Thu, Sep 06, 2018 at 06:43:02AM -0500, Brijesh Singh wrote:
> Currently, the per-cpu pvclock data is allocated dynamically when
> cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
> shared between the guest and the hypervisor, hence it must be mapped as
> unencrypted (i.e. C=0) when SEV is active.
> 
> When SEV is active, we would be wasting a fairly sizeable amount of
> memory, since each CPU would do a separate 4k allocation just so that
> it can clear the C-bit. Let's define an extra static array of pvclock
> data, sized at a few pages. In the preparatory stage of CPU hotplug,
> use an element of this static array to avoid the dynamic allocation.
> The array is placed in the .data..decrypted section so that it is
> mapped with C=0 during boot.
> 
> In the non-SEV case, these static pages are unused and will be freed by
> free_decrypted_mem().
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: kvm@vger.kernel.org
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: kvm@vger.kernel.org
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h |  4 ++++
>  arch/x86/kernel/kvmclock.c         | 22 +++++++++++++++++++---
>  arch/x86/kernel/vmlinux.lds.S      |  3 +++
>  arch/x86/mm/init.c                 |  3 +++
>  arch/x86/mm/mem_encrypt.c          | 10 ++++++++++
>  5 files changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 802b2eb..aa204af 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
>  
>  /* Architecture __weak replacement functions */
>  void __init mem_encrypt_init(void);
> +void __init free_decrypted_mem(void);
>  
>  bool sme_active(void);
>  bool sev_active(void);
>  
>  #define __decrypted __attribute__((__section__(".data..decrypted")))
> +#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))

So are we going to be defining a decrypted section for every piece of
machinery now?

That's a bit too much in my book.

Why can't you simply free everything in .data..decrypted on !SEV guests?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 12:24   ` Borislav Petkov
@ 2018-09-06 13:50     ` Sean Christopherson
  2018-09-06 14:18       ` Sean Christopherson
                         ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 13:50 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 02:24:23PM +0200, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 06:43:02AM -0500, Brijesh Singh wrote:
> > Currently, the per-cpu pvclock data is allocated dynamically when
> > cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
> > shared between the guest and the hypervisor, hence it must be mapped as
> > unencrypted (i.e. C=0) when SEV is active.
> > 
> > When SEV is active, we would be wasting a fairly sizeable amount of
> > memory, since each CPU would do a separate 4k allocation just so that
> > it can clear the C-bit. Let's define an extra static array of pvclock
> > data, sized at a few pages. In the preparatory stage of CPU hotplug,
> > use an element of this static array to avoid the dynamic allocation.
> > The array is placed in the .data..decrypted section so that it is
> > mapped with C=0 during boot.
> > 
> > In the non-SEV case, these static pages are unused and will be freed by
> > free_decrypted_mem().
> > 
> > Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> > Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: Tom Lendacky <thomas.lendacky@amd.com>
> > Cc: kvm@vger.kernel.org
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Borislav Petkov <bp@suse.de>
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: linux-kernel@vger.kernel.org
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: kvm@vger.kernel.org
> > Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> > ---
> >  arch/x86/include/asm/mem_encrypt.h |  4 ++++
> >  arch/x86/kernel/kvmclock.c         | 22 +++++++++++++++++++---
> >  arch/x86/kernel/vmlinux.lds.S      |  3 +++
> >  arch/x86/mm/init.c                 |  3 +++
> >  arch/x86/mm/mem_encrypt.c          | 10 ++++++++++
> >  5 files changed, 39 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> > index 802b2eb..aa204af 100644
> > --- a/arch/x86/include/asm/mem_encrypt.h
> > +++ b/arch/x86/include/asm/mem_encrypt.h
> > @@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> >  
> >  /* Architecture __weak replacement functions */
> >  void __init mem_encrypt_init(void);
> > +void __init free_decrypted_mem(void);
> >  
> >  bool sme_active(void);
> >  bool sev_active(void);
> >  
> >  #define __decrypted __attribute__((__section__(".data..decrypted")))
> > +#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))
> 
> So are we going to be defining a decrypted section for every piece of
> machinery now?
> 
> That's a bit too much in my book.
> 
> > Why can't you simply free everything in .data..decrypted on !SEV guests?

That would prevent adding __decrypted to existing declarations, e.g.
hv_clock_boot, which would be ugly in its own right.  A more generic
solution would be to add something like __decrypted_exclusive to mark
data that is used if and only if SEV is active, and then free the
SEV-only data when SEV is disabled.

Originally, my thought was that this would be a one-off case and the
array could be freed directly in kvmclock_init(), e.g.:

static struct pvclock_vsyscall_time_info
	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);

...

void __init kvmclock_init(void)
{
	u8 flags;

	if (!sev_active())
		free_init_pages("unused decrypted",
			(unsigned long)hv_clock_aux,
			(unsigned long)hv_clock_aux + sizeof(hv_clock_aux));

> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> -- 


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 11:43 ` [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
  2018-09-06 12:24   ` Borislav Petkov
@ 2018-09-06 14:07   ` Sean Christopherson
  2018-09-06 18:50     ` Brijesh Singh
  1 sibling, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 14:07 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, Tom Lendacky, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 06:43:02AM -0500, Brijesh Singh wrote:
> Currently, the per-cpu pvclock data is allocated dynamically when
> cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
> shared between the guest and the hypervisor, hence it must be mapped as
> unencrypted (i.e. C=0) when SEV is active.
> 
> When SEV is active, we would be wasting a fairly sizeable amount of
> memory, since each CPU would do a separate 4k allocation just so that
> it can clear the C-bit. Let's define an extra static array of pvclock
> data, sized at a few pages. In the preparatory stage of CPU hotplug,
> use an element of this static array to avoid the dynamic allocation.
> The array is placed in the .data..decrypted section so that it is
> mapped with C=0 during boot.
> 
> In the non-SEV case, these static pages are unused and will be freed by
> free_decrypted_mem().
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: kvm@vger.kernel.org
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: kvm@vger.kernel.org
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h |  4 ++++
>  arch/x86/kernel/kvmclock.c         | 22 +++++++++++++++++++---
>  arch/x86/kernel/vmlinux.lds.S      |  3 +++
>  arch/x86/mm/init.c                 |  3 +++
>  arch/x86/mm/mem_encrypt.c          | 10 ++++++++++
>  5 files changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 802b2eb..aa204af 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
>  
>  /* Architecture __weak replacement functions */
>  void __init mem_encrypt_init(void);
> +void __init free_decrypted_mem(void);
>  
>  bool sme_active(void);
>  bool sev_active(void);
>  
>  #define __decrypted __attribute__((__section__(".data..decrypted")))
> +#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))
>  
>  #else	/* !CONFIG_AMD_MEM_ENCRYPT */
>  
> @@ -80,6 +82,7 @@ static inline int __init
>  early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
>  
>  #define __decrypted
> +#define __decrypted_hvclock
>  
>  #endif	/* CONFIG_AMD_MEM_ENCRYPT */
>  
> @@ -93,6 +96,7 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
>  #define __sme_pa_nodebug(x)	(__pa_nodebug(x) | sme_me_mask)
>  
>  extern char __start_data_decrypted[], __end_data_decrypted[];
> +extern char __start_data_decrypted_hvclock[];
>  
>  #endif	/* __ASSEMBLY__ */
>  
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index 376fd3a..5b88773 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -65,6 +65,13 @@ static struct pvclock_vsyscall_time_info
>  static struct pvclock_wall_clock wall_clock __decrypted;
>  static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
>  
> +
> > +/* This should cover up to 512 VCPUs (the first 64 are covered by hv_clock_boot[]). */
> +#define HVC_DECRYPTED_ARRAY_SIZE \
> +	((PAGE_SIZE * 7)  / sizeof(struct pvclock_vsyscall_time_info))

I think we can define the size relative to NR_CPUS rather than picking
an arbitrary number of pages, maybe with a BUILD_BUG_ON to make sure
the total size won't require a second 2MB page for __decrypted.

#define HVC_DECRYPTED_ARRAY_SIZE  \
	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
		   sizeof(struct pvclock_vsyscall_time_info))
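
E.g. something like this (untested, and the exact bound depends on what
else ends up in the __decrypted section) to catch the overflow at build
time:

	BUILD_BUG_ON(sizeof(hv_clock_boot) + sizeof(hv_clock_dec) > PMD_SIZE);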

> +static struct pvclock_vsyscall_time_info
> +			hv_clock_dec[HVC_DECRYPTED_ARRAY_SIZE] __decrypted_hvclock;
> +
>  static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
>  {
>  	return &this_cpu_read(hv_clock_per_cpu)->pvti;
> @@ -267,10 +274,19 @@ static int kvmclock_setup_percpu(unsigned int cpu)
>  		return 0;
>  
>  	/* Use the static page for the first CPUs, allocate otherwise */
> -	if (cpu < HVC_BOOT_ARRAY_SIZE)
> +	if (cpu < HVC_BOOT_ARRAY_SIZE) {
>  		p = &hv_clock_boot[cpu];
> -	else
> -		p = kzalloc(sizeof(*p), GFP_KERNEL);
> +	} else {
> +		/*
> +		 * When SEV is active, use the static pages from
> +		 * .data..decrypted_hvclock section. The pages are already
> +		 * mapped with C=0.
> +		 */
> +		if (sev_active())
> +			p = &hv_clock_dec[cpu - HVC_BOOT_ARRAY_SIZE];
> +		else
> +			p = kzalloc(sizeof(*p), GFP_KERNEL);
> +	}

Personal preference, but I think an if-elif-else with a single block
comment would be easier to read.

	/*
	 * Blah blah blah
	 */
	if (cpu < HVC_BOOT_ARRAY_SIZE)
		p = &hv_clock_boot[cpu];
	else if (sev_active())
		p = &hv_clock_dec[cpu - HVC_BOOT_ARRAY_SIZE];
	else
		p = kzalloc(sizeof(*p), GFP_KERNEL);


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 13:50     ` Sean Christopherson
@ 2018-09-06 14:18       ` Sean Christopherson
  2018-09-06 14:44         ` Borislav Petkov
  2018-09-06 18:37         ` Brijesh Singh
  2018-09-06 14:43       ` Borislav Petkov
  2018-09-06 17:50       ` Brijesh Singh
  2 siblings, 2 replies; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 14:18 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 06:50:41AM -0700, Sean Christopherson wrote:
> On Thu, Sep 06, 2018 at 02:24:23PM +0200, Borislav Petkov wrote:
> > On Thu, Sep 06, 2018 at 06:43:02AM -0500, Brijesh Singh wrote:
> > > Currently, the per-cpu pvclock data is allocated dynamically when
> > > cpu > HVC_BOOT_ARRAY_SIZE. The physical address of this variable is
> > > shared between the guest and the hypervisor, hence it must be mapped as
> > > unencrypted (i.e. C=0) when SEV is active.
> > > 
> > > When SEV is active, we would be wasting a fairly sizeable amount of
> > > memory, since each CPU would do a separate 4k allocation just so that
> > > it can clear the C-bit. Let's define an extra static array of pvclock
> > > data, sized at a few pages. In the preparatory stage of CPU hotplug,
> > > use an element of this static array to avoid the dynamic allocation.
> > > The array is placed in the .data..decrypted section so that it is
> > > mapped with C=0 during boot.
> > > 
> > > In the non-SEV case, these static pages are unused and will be freed by
> > > free_decrypted_mem().
> > > 
> > > diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> > > index 802b2eb..aa204af 100644
> > > --- a/arch/x86/include/asm/mem_encrypt.h
> > > +++ b/arch/x86/include/asm/mem_encrypt.h
> > > @@ -48,11 +48,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> > >  
> > >  /* Architecture __weak replacement functions */
> > >  void __init mem_encrypt_init(void);
> > > +void __init free_decrypted_mem(void);
> > >  
> > >  bool sme_active(void);
> > >  bool sev_active(void);
> > >  
> > >  #define __decrypted __attribute__((__section__(".data..decrypted")))
> > > +#define __decrypted_hvclock __attribute__((__section__(".data..decrypted_hvclock")))
> > 
> > So are we going to be defining a decrypted section for every piece of
> > machinery now?
> > 
> > That's a bit too much in my book.
> > 
> > Why can't you simply free everything in .data..decrypted on !SEV guests?
> 
> That would prevent adding __decrypted to existing declarations, e.g.
> hv_clock_boot, which would be ugly in its own right.  A more generic
> solution would be to add something like __decrypted_exclusive to mark
> data that is used if and only if SEV is active, and then free the
> SEV-only data when SEV is disabled.

Oh, and we'd need to make sure __decrypted_exclusive is freed when
!CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
array is used only if SEV is active.  This patch unconditionally
defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
!mem_encrypt_active().


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 13:50     ` Sean Christopherson
  2018-09-06 14:18       ` Sean Christopherson
@ 2018-09-06 14:43       ` Borislav Petkov
  2018-09-06 14:56         ` Sean Christopherson
  2018-09-06 17:50       ` Brijesh Singh
  2 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 14:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 06:50:41AM -0700, Sean Christopherson wrote:
> That would prevent adding __decrypted to existing declarations, e.g.
> hv_clock_boot, which would be ugly in its own right.  A more generic
> solution would be to add something like __decrypted_exclusive to mark

I still don't understand why there can't be only a single __decrypted
section, and why we can't free that whole section on !SEV.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 14:18       ` Sean Christopherson
@ 2018-09-06 14:44         ` Borislav Petkov
  2018-09-06 18:37         ` Brijesh Singh
  1 sibling, 0 replies; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 14:44 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 07:18:25AM -0700, Sean Christopherson wrote:
> Oh, and we'd need to make sure __decrypted_exclusive is freed when
> !CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
> array is used only if SEV is active.  This patch unconditionally
> defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
> !mem_encrypt_active().

We should not go nuts and complicate the code only to save us a couple
of KBs.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 14:43       ` Borislav Petkov
@ 2018-09-06 14:56         ` Sean Christopherson
  2018-09-06 15:19           ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 14:56 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 04:43:42PM +0200, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 06:50:41AM -0700, Sean Christopherson wrote:
> > That would prevent adding __decrypted to existing declarations, e.g.
> > hv_clock_boot, which would be ugly in its own right.  A more generic
> > solution would be to add something like __decrypted_exclusive to mark
> 
> I still don't understand why can't there be only a single __decrypted
> section and to free that whole section on !SEV.

Wouldn't that result in @hv_clock_boot being incorrectly freed in the
!SEV case?


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 14:56         ` Sean Christopherson
@ 2018-09-06 15:19           ` Borislav Petkov
  2018-09-06 15:54             ` Sean Christopherson
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 15:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 07:56:40AM -0700, Sean Christopherson wrote:
> Wouldn't that result in @hv_clock_boot being incorrectly freed in the
> !SEV case?

Ok, maybe I'm missing something but why do we need 4K per CPU? Why can't
we map all those pages which contain the clock variables as decrypted in
all guests' page tables?

Basically

(NR_CPUS * sizeof(struct pvclock_vsyscall_time_info)) / 4096

pages.

For the !SEV case, nothing changes.
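
(Assuming a 64-byte struct pvclock_vsyscall_time_info and NR_CPUS=512,
for example, that works out to (512 * 64) / 4096 = 8 pages.)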

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 15:19           ` Borislav Petkov
@ 2018-09-06 15:54             ` Sean Christopherson
  2018-09-06 18:33               ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 15:54 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 05:19:38PM +0200, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 07:56:40AM -0700, Sean Christopherson wrote:
> > Wouldn't that result in @hv_clock_boot being incorrectly freed in the
> > !SEV case?
> 
> Ok, maybe I'm missing something, but why do we need 4K per CPU? Why
> can't we map all those pages which contain the clock variable as
> decrypted in all guests' page tables?
> 
> Basically
> 
> (NR_CPUS * sizeof(struct pvclock_vsyscall_time_info)) / 4096
> 
> pages.
> 
> For the !SEV case, nothing changes.

The 4k per CPU refers to the dynamic allocation in Brijesh's original
patch.  Currently, @hv_clock_boot is a single 4k page to limit the
amount of unused memory when 'nr_cpu_ids < NR_CPUS'.  In the SEV case,
dynamically allocating for 'cpu > HVC_BOOT_ARRAY_SIZE' one at a time
means that each CPU allocates a full 4k page to store a single 32-byte
variable.  My thought was that we could simply define a second array
for the SEV case to statically allocate for NR_CPUS since __decrypted
has a big chunk of memory that would be unused anyways[1].  And since
the second array is only used for SEV, it can be freed if !SEV.

If we free the array explicitly then we don't need a second section or
attribute.  My comments about __decrypted_exclusive were that if we
did want to go with a second section/attribute, e.g. to have a generic
solution that can be used for other stuff, then we'd have more corner
cases to deal with.  I agree that simpler is better, i.e. I'd vote for
explicitly freeing the second array.  Apologies for not making that
clear from the get-go. 

[1] An alternative solution would be to batch the dynamic allocations,
    but that would probably require locking and be more complex.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 13:50     ` Sean Christopherson
  2018-09-06 14:18       ` Sean Christopherson
  2018-09-06 14:43       ` Borislav Petkov
@ 2018-09-06 17:50       ` Brijesh Singh
  2 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 17:50 UTC (permalink / raw)
  To: Sean Christopherson, Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 08:50 AM, Sean Christopherson wrote:
...

>>
>> So are we going to be defining a decrypted section for every piece of
>> machinery now?
>>
>> That's a bit too much in my book.
>>
>> Why can't you simply free everything in .data..decrypted on !SEV guests?
> 
> That would prevent adding __decrypted to existing declarations, e.g.
> hv_clock_boot, which would be ugly in its own right.  A more generic
> solution would be to add something like __decrypted_exclusive to mark
> data that is used if and only if SEV is active, and then free the
> SEV-only data when SEV is disabled.
> 
> Originally, my thought was that this would be a one-off case and the
> array could be freed directly in kvmclock_init(), e.g.:
> 


Please note that kvmclock_init() is called very early during the boot
process, so we will not be able to use free_init_pages(...) that early.
We also need to consider the bare-metal case, which will never call
kvmclock_init() at all.
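
If we do want to reclaim it on bare metal as well, one option (rough
sketch only; free_unused_decrypted_mem() and the *_unused linker
symbols are made-up names) would be to do the freeing from an initcall
in the memory encryption code, which runs regardless of kvmclock:

	static int __init free_unused_decrypted_mem(void)
	{
		/*
		 * The SEV-only tail of .data..decrypted is never used
		 * on bare metal or in !SEV guests, so give it back.
		 */
		if (!sev_active())
			free_init_pages("unused decrypted",
				(unsigned long)__start_data_decrypted_unused,
				(unsigned long)__end_data_decrypted);
		return 0;
	}
	core_initcall(free_unused_decrypted_mem);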



> static struct pvclock_vsyscall_time_info
> 	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
> 
> ...
> 
> void __init kvmclock_init(void)
> {
> 	u8 flags;
> 
> 	if (!sev_active())
> 		free_init_pages("unused decrypted",
> 			(unsigned long)hv_clock_aux,
> 			(unsigned long)hv_clock_aux + sizeof(hv_clock_aux));
> 
>>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 15:54             ` Sean Christopherson
@ 2018-09-06 18:33               ` Borislav Petkov
  2018-09-06 18:43                 ` Brijesh Singh
  2018-09-06 18:45                 ` Sean Christopherson
  0 siblings, 2 replies; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 18:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 08:54:52AM -0700, Sean Christopherson wrote:
> My thought was that we could simply define a second array for the SEV
> case to statically allocate for NR_CPUS since __decrypted has a big
> chunk of memory that would be unused anyways[1]. And since the second
> array is only used for SEV, it can be freed if !SEV.

Lemme see if I get it straight:

__decrypted:

 4K

__decrypted_XXX:

 ((num_possible_cpus() * 32) / 4K) pages

__decrypted_end:

Am I close?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 14:18       ` Sean Christopherson
  2018-09-06 14:44         ` Borislav Petkov
@ 2018-09-06 18:37         ` Brijesh Singh
  2018-09-06 18:47           ` Sean Christopherson
  1 sibling, 1 reply; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 18:37 UTC (permalink / raw)
  To: Sean Christopherson, Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 09:18 AM, Sean Christopherson wrote:
....

>>>
>>> So are we going to be defining a decrypted section for every piece of
>>> machinery now?
>>>
>>> That's a bit too much in my book.
>>>
>>> Why can't you simply free everything in .data..decrypted on !SEV guests?
>>
>> That would prevent adding __decrypted to existing declarations, e.g.
>> hv_clock_boot, which would be ugly in its own right.  A more generic
>> solution would be to add something like __decrypted_exclusive to mark
>> data that is used if and only if SEV is active, and then free the
>> SEV-only data when SEV is disabled.
> 
> Oh, and we'd need to make sure __decrypted_exclusive is freed when
> !CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
> array is used only if SEV is active.  This patch unconditionally
> defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
> !mem_encrypt_active().
> 

Again, we have to consider the bare-metal scenario while doing this. The
aux array you proposed will be added to the decrypted section only when
CONFIG_AMD_MEM_ENCRYPT=y.  If CONFIG_AMD_MEM_ENCRYPT=n then nothing
gets put in the .data..decrypted section. At runtime, if memory
encryption is active, .data..decrypted_hvclock will contain useful
data.

The __decrypted attribute expands to "" (i.e. to nothing) when
CONFIG_AMD_MEM_ENCRYPT=n.
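
For reference, the attribute is defined along these lines (sketch of
the patch 3/5 definition, presumably living in mem_encrypt.h):

	#ifdef CONFIG_AMD_MEM_ENCRYPT
	#define __decrypted __attribute__((__section__(".data..decrypted")))
	#else
	#define __decrypted
	#endif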


-Brijesh

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:33               ` Borislav Petkov
@ 2018-09-06 18:43                 ` Brijesh Singh
  2018-09-06 18:45                 ` Sean Christopherson
  1 sibling, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 18:43 UTC (permalink / raw)
  To: Borislav Petkov, Sean Christopherson
  Cc: brijesh.singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 01:33 PM, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 08:54:52AM -0700, Sean Christopherson wrote:
>> My thought was that we could simply define a second array for the SEV
>> case to statically allocate for NR_CPUS since __decrypted has a big
>> chunk of memory that would be unused anyways[1]. And since the second
>> array is only used for SEV, it can be freed if !SEV.
> 
> Lemme see if I get it straight:
> 
> __decrypted:
> 
>   4K
> 
> __decrypted_XXX:
> 
>   ((num_possible_cpus() * 32) / 4K) pages
> 
> __decrypted_end:
> 
> Am I close?


Yes.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:33               ` Borislav Petkov
  2018-09-06 18:43                 ` Brijesh Singh
@ 2018-09-06 18:45                 ` Sean Christopherson
  2018-09-06 19:03                   ` Borislav Petkov
  1 sibling, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 18:45 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 08:33:34PM +0200, Borislav Petkov wrote:
> On Thu, Sep 06, 2018 at 08:54:52AM -0700, Sean Christopherson wrote:
> > My thought was that we could simply define a second array for the SEV
> > case to statically allocate for NR_CPUS since __decrypted has a big
> > chunk of memory that would be unused anyways[1]. And since the second
> > array is only used for SEV, it can be freed if !SEV.
> 
> Lemme see if I get it straight:
> 
> __decrypted:
> 
>  4K
> 
> __decrypted_XXX:
> 
>  ((num_possible_cpus() * 32) / 4K) pages
> 
> __decrypted_end:
> 
> Am I close?

Yep, though because the 4k chunk in __decrypted is @hv_clock_boot 
that's used for cpus 0-127, __decrypted_XXX would effectively be:

   (((num_possible_cpus() * 32) / 4k) - 1) pages
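
E.g., assuming 512 possible CPUs and the 32-byte size used above:
512 * 32 = 16384 bytes, i.e. 4 pages total, minus the hv_clock_boot
page already in __decrypted = 3 pages for __decrypted_XXX.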

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:37         ` Brijesh Singh
@ 2018-09-06 18:47           ` Sean Christopherson
  2018-09-06 19:24             ` Brijesh Singh
  0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 18:47 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: Borislav Petkov, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 01:37:50PM -0500, Brijesh Singh wrote:
> 
> 
> On 09/06/2018 09:18 AM, Sean Christopherson wrote:
> ....
> 
> >>>
> >>>So are we going to be defining a decrypted section for every piece of
> >>>machinery now?
> >>>
> >>>That's a bit too much in my book.
> >>>
> >>>Why can't you simply free everything in .data..decrypted on !SEV guests?
> >>
> >>That would prevent adding __decrypted to existing declarations, e.g.
> >>hv_clock_boot, which would be ugly in its own right.  A more generic
> >>solution would be to add something like __decrypted_exclusive to mark
> >>data that is used if and only if SEV is active, and then free the
> >>SEV-only data when SEV is disabled.
> >
> >Oh, and we'd need to make sure __decrypted_exclusive is freed when
> >!CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
> >array is used only if SEV is active.  This patch unconditionally
> >defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
> >!mem_encrypt_active().
> >
> 
> Again, we have to consider the bare-metal scenario while doing this. The
> aux array you proposed will be added to the decrypted section only when
> CONFIG_AMD_MEM_ENCRYPT=y.  If CONFIG_AMD_MEM_ENCRYPT=n then nothing
> gets put in the .data..decrypted section. At runtime, if memory
> encryption is active, .data..decrypted_hvclock will contain useful
> data.
> 
> The __decrypted attribute expands to "" (i.e. to nothing) when
> CONFIG_AMD_MEM_ENCRYPT=n.

Right, but won't the data get dumped into the regular .bss in that
case, i.e. needs to be freed?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 14:07   ` Sean Christopherson
@ 2018-09-06 18:50     ` Brijesh Singh
  2018-09-07  3:57       ` Brijesh Singh
  0 siblings, 1 reply; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 18:50 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 09:07 AM, Sean Christopherson wrote:
...

>> +
>> +/* This should cover up to 512 VCPUS (first 64 are covered by hv_clock_boot[]). */
>> +#define HVC_DECRYPTED_ARRAY_SIZE \
>> +	((PAGE_SIZE * 7)  / sizeof(struct pvclock_vsyscall_time_info))
> 
> I think we can define the size relative to NR_CPUS rather than picking
> an arbitrary number of pages, maybe with a BUILD_BUG_ON to make sure
> the total size won't require a second 2MB page for __decrypted.
> 
> #define HVC_DECRYPTED_ARRAY_SIZE  \
> 	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
> 		   sizeof(struct pvclock_vsyscall_time_info))
> 

Sure works for me.
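
And for the BUILD_BUG_ON, maybe something like this (untested sketch,
assuming everything placed in .data..decrypted has to fit within the
single PMD-sized mapping):

	/* e.g. in kvmclock_init() */
	BUILD_BUG_ON(sizeof(hv_clock_boot) + HVC_DECRYPTED_ARRAY_SIZE >
		     PMD_SIZE);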

>> +static struct pvclock_vsyscall_time_info
>> +			hv_clock_dec[HVC_DECRYPTED_ARRAY_SIZE] __decrypted_hvclock;
>> +
>>   static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
>>   {
>>   	return &this_cpu_read(hv_clock_per_cpu)->pvti;
>> @@ -267,10 +274,19 @@ static int kvmclock_setup_percpu(unsigned int cpu)
>>   		return 0;
>>   
>>   	/* Use the static page for the first CPUs, allocate otherwise */
>> -	if (cpu < HVC_BOOT_ARRAY_SIZE)
>> +	if (cpu < HVC_BOOT_ARRAY_SIZE) {
>>   		p = &hv_clock_boot[cpu];
>> -	else
>> -		p = kzalloc(sizeof(*p), GFP_KERNEL);
>> +	} else {
>> +		/*
>> +		 * When SEV is active, use the static pages from
>> +		 * .data..decrypted_hvclock section. The pages are already
>> +		 * mapped with C=0.
>> +		 */
>> +		if (sev_active())
>> +			p = &hv_clock_dec[cpu - HVC_BOOT_ARRAY_SIZE];
>> +		else
>> +			p = kzalloc(sizeof(*p), GFP_KERNEL);
>> +	}
> 
> Personal preference, but I think an if-elif-else with a single block
> comment would be easier to read.


I can go with that. Thanks for your feedback.


> 
> 	/*
> 	 * Blah blah blah
> 	 */
> 	if (cpu < HVC_BOOT_ARRAY_SIZE)
> 		p = &hv_clock_boot[cpu];
> 	else if (sev_active())
> 		p = &hv_clock_dec[cpu - HVC_BOOT_ARRAY_SIZE];
> 	else
> 		p = kzalloc(sizeof(*p), GFP_KERNEL);
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:45                 ` Sean Christopherson
@ 2018-09-06 19:03                   ` Borislav Petkov
  0 siblings, 0 replies; 30+ messages in thread
From: Borislav Petkov @ 2018-09-06 19:03 UTC (permalink / raw)
  To: Sean Christopherson, Brijesh Singh
  Cc: x86, linux-kernel, kvm, Tom Lendacky, Thomas Gleixner,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář

On Thu, Sep 06, 2018 at 11:45:03AM -0700, Sean Christopherson wrote:
> Yep, though because the 4k chunk in __decrypted is @hv_clock_boot 
> that's used for cpus 0-127, __decrypted_XXX would effectively be:
> 
>    (((num_possible_cpus() * 32) / 4k) - 1) pages

Ok, sounds like a nice compromise to me.

Also, I wonder if using subsections would be even better when adding
other things to the decrypted section. I.e.,

.data..decrypted:

...

.data..decrypted.aux:

...

.data..decrypted.something_else:

and this way keep it still conceptually together by keeping the section
namespace clean because we're putting it all under .decrypted's
namespace.
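
I.e., in vmlinux.lds.S, something like (completely untested, symbol
names made up):

	. = ALIGN(PMD_SIZE);
	__start_data_decrypted = .;
	*(.data..decrypted)
	*(.data..decrypted.*)
	. = ALIGN(PMD_SIZE);
	__end_data_decrypted = .;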

Hmmm.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:47           ` Sean Christopherson
@ 2018-09-06 19:24             ` Brijesh Singh
  2018-09-06 19:46               ` Brijesh Singh
  2018-09-06 19:47               ` Sean Christopherson
  0 siblings, 2 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 19:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, Borislav Petkov, x86, linux-kernel, kvm,
	Tom Lendacky, Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 01:47 PM, Sean Christopherson wrote:
> On Thu, Sep 06, 2018 at 01:37:50PM -0500, Brijesh Singh wrote:
>>
>>
>> On 09/06/2018 09:18 AM, Sean Christopherson wrote:
>> ....
>>
>>>>>
>>>>> So are we going to be defining a decrypted section for every piece of
>>>>> machinery now?
>>>>>
>>>>> That's a bit too much in my book.
>>>>>
>>>>> Why can't you simply free everything in .data..decrypted on !SEV guests?
>>>>
>>>> That would prevent adding __decrypted to existing declarations, e.g.
>>>> hv_clock_boot, which would be ugly in its own right.  A more generic
>>>> solution would be to add something like __decrypted_exclusive to mark
>>>> data that is used if and only if SEV is active, and then free the
>>>> SEV-only data when SEV is disabled.
>>>
>>> Oh, and we'd need to make sure __decrypted_exclusive is freed when
>>> !CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
>>> array is used only if SEV is active.  This patch unconditionally
>>> defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
>>> !mem_encrypt_active().
>>>
>>
>> Again, we have to consider the bare-metal scenario while doing this. The
>> aux array you proposed will be added to the decrypted section only when
>> CONFIG_AMD_MEM_ENCRYPT=y.  If CONFIG_AMD_MEM_ENCRYPT=n then nothing
>> gets put in the .data..decrypted section. At runtime, if memory
>> encryption is active, .data..decrypted_hvclock will contain useful
>> data.
>>
>> The __decrypted attribute expands to "" (i.e. to nothing) when
>> CONFIG_AMD_MEM_ENCRYPT=n.
> 
> Right, but won't the data get dumped into the regular .bss in that
> case, i.e. needs to be freed?
> 


Yes, the auxiliary array will be dumped into the regular .bss when
CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
sure if it's worth complicating the code to save that extra memory.
Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 19:24             ` Brijesh Singh
@ 2018-09-06 19:46               ` Brijesh Singh
  2018-09-06 19:47               ` Sean Christopherson
  1 sibling, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 19:46 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, Borislav Petkov, x86, linux-kernel, kvm,
	Tom Lendacky, Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 02:24 PM, Brijesh Singh wrote:

...

>>>
>>> Again, we have to consider the bare-metal scenario while doing this. The
>>> aux array you proposed will be added to the decrypted section only when
>>> CONFIG_AMD_MEM_ENCRYPT=y.  If CONFIG_AMD_MEM_ENCRYPT=n then nothing
>>> gets put in the .data..decrypted section. At runtime, if memory
>>> encryption is active, .data..decrypted_hvclock will contain useful
>>> data.
>>>
>>> The __decrypted attribute expands to "" (i.e. to nothing) when
>>> CONFIG_AMD_MEM_ENCRYPT=n.
>>
>> Right, but won't the data get dumped into the regular .bss in that
>> case, i.e. needs to be freed?
>>
> 
> 
> Yes, the auxiliary array will be dumped into the regular .bss when
> CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
> sure if it's worth complicating the code to save that extra memory.
> Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.

We can use #ifdef CONFIG_AMD_MEM_ENCRYPT around the hv_clock_aux
definition so that it gets defined only when CONFIG_AMD_MEM_ENCRYPT=y.

Something like this:

#ifdef CONFIG_AMD_MEM_ENCRYPT
/* The auxiliary array will be used when SEV is active */
#define HVC_AUX_ARRAY_SIZE \
         PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
                 sizeof(struct pvclock_vsyscall_time_info))
static struct pvclock_vsyscall_time_info
                         hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted_aux;
#endif


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 19:24             ` Brijesh Singh
  2018-09-06 19:46               ` Brijesh Singh
@ 2018-09-06 19:47               ` Sean Christopherson
  2018-09-06 20:20                 ` Brijesh Singh
  1 sibling, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 19:47 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: Borislav Petkov, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 02:24:32PM -0500, Brijesh Singh wrote:
> 
> 
> On 09/06/2018 01:47 PM, Sean Christopherson wrote:
> >On Thu, Sep 06, 2018 at 01:37:50PM -0500, Brijesh Singh wrote:
> >>
> >>
> >>On 09/06/2018 09:18 AM, Sean Christopherson wrote:
> >>....
> >>
> >>>>>
> >>>>>So are we going to be defining a decrypted section for every piece of
> >>>>>machinery now?
> >>>>>
> >>>>>That's a bit too much in my book.
> >>>>>
> >>>>>Why can't you simply free everything in .data..decrypted on !SEV guests?
> >>>>
> >>>>That would prevent adding __decrypted to existing declarations, e.g.
> >>>>hv_clock_boot, which would be ugly in its own right.  A more generic
> >>>>solution would be to add something like __decrypted_exclusive to mark
> >>>>data that is used if and only if SEV is active, and then free the
> >>>>SEV-only data when SEV is disabled.
> >>>
> >>>Oh, and we'd need to make sure __decrypted_exclusive is freed when
> >>>!CONFIG_AMD_MEM_ENCRYPT, and preferably !sev_active() since the big
> >>>array is used only if SEV is active.  This patch unconditionally
> >>>defines hv_clock_dec but only frees it if CONFIG_AMD_MEM_ENCRYPT=y &&
> >>>!mem_encrypt_active().
> >>>
> >>
> >>Again, we have to consider the bare-metal scenario while doing this. The
> >>aux array you proposed will be added to the decrypted section only when
> >>CONFIG_AMD_MEM_ENCRYPT=y.  If CONFIG_AMD_MEM_ENCRYPT=n then nothing
> >>gets put in the .data..decrypted section. At runtime, if memory
> >>encryption is active, .data..decrypted_hvclock will contain useful
> >>data.
> >>
> >>The __decrypted attribute expands to "" (i.e. to nothing) when
> >>CONFIG_AMD_MEM_ENCRYPT=n.
> >
> >Right, but won't the data get dumped into the regular .bss in that
> >case, i.e. needs to be freed?
> >
> 
> 
> Yes, the auxiliary array will be dumped into the regular .bss when
> CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
> sure if it's worth complicating the code to save that extra memory.
> Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.

I just realized that we'll try to create a bogus array if 'NR_CPUS <=
HVC_BOOT_ARRAY_SIZE'.  A bit ugly, but we could #ifdef away both that
and CONFIG_AMD_MEM_ENCRYPT=n in a single shot, e.g.:

#if defined(CONFIG_AMD_MEM_ENCRYPT) && NR_CPUS > HVC_BOOT_ARRAY_SIZE
#define HVC_AUX_ARRAY_SIZE  \
	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
		   sizeof(struct pvclock_vsyscall_time_info))
static struct pvclock_vsyscall_time_info
	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
#endif


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 19:47               ` Sean Christopherson
@ 2018-09-06 20:20                 ` Brijesh Singh
  2018-09-06 20:39                   ` Sean Christopherson
  0 siblings, 1 reply; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 20:20 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, Borislav Petkov, x86, linux-kernel, kvm,
	Tom Lendacky, Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 02:47 PM, Sean Christopherson wrote:
...

>>
>> Yes, the auxiliary array will be dumped into the regular .bss when
>> CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
>> sure if it's worth complicating the code to save that extra memory.
>> Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.
> 
> I just realized that we'll try to create a bogus array if 'NR_CPUS <=
> HVC_BOOT_ARRAY_SIZE'.  A bit ugly, but we could #ifdef away both that
> and CONFIG_AMD_MEM_ENCRYPT=n in a single shot, e.g.:
> 
> #if defined(CONFIG_AMD_MEM_ENCRYPT) && NR_CPUS > HVC_BOOT_ARRAY_SIZE
> #define HVC_AUX_ARRAY_SIZE  \
> 	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
> 		   sizeof(struct pvclock_vsyscall_time_info))
> static struct pvclock_vsyscall_time_info
> 	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
> #endif
> 

The HVC_BOOT_ARRAY_SIZE macro uses sizeof(..), and to my understanding
the sizeof operator is not allowed in '#if'. Anyway, I will try to see
if it can be used; if not, then I will stick to the
CONFIG_AMD_MEM_ENCRYPT check.
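
For reference, HVC_BOOT_ARRAY_SIZE is defined along these lines:

	#define HVC_BOOT_ARRAY_SIZE \
		(PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info))

and the preprocessor evaluates #if expressions before it has any type
information, so e.g.

	#if NR_CPUS > HVC_BOOT_ARRAY_SIZE

fails with something like "missing binary operator before token '('".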

-Brijesh

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 20:20                 ` Brijesh Singh
@ 2018-09-06 20:39                   ` Sean Christopherson
  2018-09-06 21:56                     ` Brijesh Singh
  0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2018-09-06 20:39 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: Borislav Petkov, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář

On Thu, Sep 06, 2018 at 03:20:46PM -0500, Brijesh Singh wrote:
> 
> 
> On 09/06/2018 02:47 PM, Sean Christopherson wrote:
> ...
> 
> >>
> >>Yes, the auxiliary array will be dumped into the regular .bss when
> >>CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
> >>sure if it's worth complicating the code to save that extra memory.
> >>Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.
> >
> >I just realized that we'll try to create a bogus array if 'NR_CPUS <=
> >HVC_BOOT_ARRAY_SIZE'.  A bit ugly, but we could #ifdef away both that
> >and CONFIG_AMD_MEM_ENCRYPT=n in a single shot, e.g.:
> >
> >#if defined(CONFIG_AMD_MEM_ENCRYPT) && NR_CPUS > HVC_BOOT_ARRAY_SIZE
> >#define HVC_AUX_ARRAY_SIZE  \
> >	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
> >		   sizeof(struct pvclock_vsyscall_time_info))
> >static struct pvclock_vsyscall_time_info
> >	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
> >#endif
> >
> 
> The HVC_BOOT_ARRAY_SIZE macro uses sizeof(..), and to my understanding
> the sizeof operator is not allowed in '#if'. Anyway, I will try to see
> if it can be used; if not, then I will stick to the
> CONFIG_AMD_MEM_ENCRYPT check.

Hmm, we'll need something, otherwise 'NR_CPUS - HVC_BOOT_ARRAY_SIZE'
will wrap and cause build errors.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 20:39                   ` Sean Christopherson
@ 2018-09-06 21:56                     ` Brijesh Singh
  0 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-06 21:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, Borislav Petkov, x86, linux-kernel, kvm,
	Tom Lendacky, Thomas Gleixner, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 09/06/2018 03:39 PM, Sean Christopherson wrote:
> On Thu, Sep 06, 2018 at 03:20:46PM -0500, Brijesh Singh wrote:
>>
>>
>> On 09/06/2018 02:47 PM, Sean Christopherson wrote:
>> ...
>>
>>>>
>>>> Yes, the auxiliary array will be dumped into the regular .bss when
>>>> CONFIG_AMD_MEM_ENCRYPT=n. Typically it will be a few KB; I am not
>>>> sure if it's worth complicating the code to save that extra memory.
>>>> Most of the distros have CONFIG_AMD_MEM_ENCRYPT=y anyways.
>>>
>>> I just realized that we'll try to create a bogus array if 'NR_CPUS <=
>>> HVC_BOOT_ARRAY_SIZE'.  A bit ugly, but we could #ifdef away both that
>>> and CONFIG_AMD_MEM_ENCRYPT=n in a single shot, e.g.:
>>>
>>> #if defined(CONFIG_AMD_MEM_ENCRYPT) && NR_CPUS > HVC_BOOT_ARRAY_SIZE
>>> #define HVC_AUX_ARRAY_SIZE  \
>>> 	PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
>>> 		   sizeof(struct pvclock_vsyscall_time_info))
>>> static struct pvclock_vsyscall_time_info
>>> 	hv_clock_aux[HVC_AUX_ARRAY_SIZE] __decrypted __aligned(PAGE_SIZE);
>>> #endif
>>>
>>
>> The HVC_BOOT_ARRAY_SIZE macro uses sizeof(..), and to my understanding
>> the sizeof operator is not allowed in '#if'. Anyway, I will try to see
>> if it can be used; if not, then I will stick to the
>> CONFIG_AMD_MEM_ENCRYPT check.
> 
> Hmm, we'll need something, otherwise 'NR_CPUS - HVC_BOOT_ARRAY_SIZE'
> will wrap and cause build errors.
> 


Right.

One option is to hard-code the check for > 64, something like this:

#if defined(CONFIG_AMD_MEM_ENCRYPT) && NR_CPUS > 64
...
...
#endif

But this assumption will break if we ever add a new field in
struct pvclock_vsyscall_time_info. Hence I am not in favor of this.

The second option is to use KVM_MAX_VCPUS or NR_CPUS, something like this:

#ifdef CONFIG_AMD_MEM_ENCRYPT
  #define HVC_AUX_ARRAY_SIZE \
          PAGE_ALIGN(NR_CPUS * sizeof(struct pvclock_vsyscall_time_info))
...
#endif

In this case we will allocate a few extra bytes, which will get freed
in the non-SEV case anyways.




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
  2018-09-06 18:50     ` Brijesh Singh
@ 2018-09-07  3:57       ` Brijesh Singh
  0 siblings, 0 replies; 30+ messages in thread
From: Brijesh Singh @ 2018-09-07  3:57 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: brijesh.singh, x86, linux-kernel, kvm, Tom Lendacky,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář



On 9/6/18 1:50 PM, Brijesh Singh wrote:
...

>>
>> #define HVC_DECRYPTED_ARRAY_SIZE  \
>>     PAGE_ALIGN((NR_CPUS - HVC_BOOT_ARRAY_SIZE) * \
>>            sizeof(struct pvclock_vsyscall_time_info))
>>
>

Since the hv_clock_aux array will have NR_CPUS elements, the
following definition is all we need.

#ifdef CONFIG_AMD_MEM_ENCRYPT
static struct pvclock_vsyscall_time_info
                            hv_clock_aux[NR_CPUS] __decrypted_aux;
#endif


> Sure works for me.
>
>>> +static struct pvclock_vsyscall_time_info
>>> +            hv_clock_dec[HVC_DECRYPTED_ARRAY_SIZE]
>>> __decrypted_hvclock;
>>> +


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2018-09-07  3:57 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-06 11:42 [PATCH v5 0/5] x86: Fix SEV guest regression Brijesh Singh
2018-09-06 11:42 ` [PATCH v5 1/5] x86/mm: Restructure sme_encrypt_kernel() Brijesh Singh
2018-09-06 11:42 ` [PATCH v5 2/5] x86/mm: fix sme_populate_pgd() to update page flags Brijesh Singh
2018-09-06 11:43 ` [PATCH v5 3/5] x86/mm: add .data..decrypted section to hold shared variables Brijesh Singh
2018-09-06 11:43 ` [PATCH v5 4/5] x86/kvm: use __decrypted attribute in " Brijesh Singh
2018-09-06 11:43 ` [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
2018-09-06 12:24   ` Borislav Petkov
2018-09-06 13:50     ` Sean Christopherson
2018-09-06 14:18       ` Sean Christopherson
2018-09-06 14:44         ` Borislav Petkov
2018-09-06 18:37         ` Brijesh Singh
2018-09-06 18:47           ` Sean Christopherson
2018-09-06 19:24             ` Brijesh Singh
2018-09-06 19:46               ` Brijesh Singh
2018-09-06 19:47               ` Sean Christopherson
2018-09-06 20:20                 ` Brijesh Singh
2018-09-06 20:39                   ` Sean Christopherson
2018-09-06 21:56                     ` Brijesh Singh
2018-09-06 14:43       ` Borislav Petkov
2018-09-06 14:56         ` Sean Christopherson
2018-09-06 15:19           ` Borislav Petkov
2018-09-06 15:54             ` Sean Christopherson
2018-09-06 18:33               ` Borislav Petkov
2018-09-06 18:43                 ` Brijesh Singh
2018-09-06 18:45                 ` Sean Christopherson
2018-09-06 19:03                   ` Borislav Petkov
2018-09-06 17:50       ` Brijesh Singh
2018-09-06 14:07   ` Sean Christopherson
2018-09-06 18:50     ` Brijesh Singh
2018-09-07  3:57       ` Brijesh Singh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).