* [PATCH 0/3] x86/mm/encrypt: Simplify pgtable helpers
@ 2017-12-12 11:45 ` Kirill A. Shutemov
  0 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2017-12-12 11:45 UTC (permalink / raw)
  To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, Borislav Petkov, Brijesh Singh, linux-mm, linux-kernel,
	Kirill A. Shutemov

This patchset simplifies sme_populate_pgd(), sme_populate_pgd_large() and
sme_pgtable_calc() functions.

As a side effect, the patchset makes the encryption code ready for
boot-time switching between paging modes.

The patchset is built on top of Tom's "x86: SME: BSP/SME microcode update
fix" patchset.

It was only build-tested. Tom, could you please get it tested properly?

Kirill A. Shutemov (3):
  x86/mm/encrypt: Move sme_populate_pgd*() into separate translation
    unit
  x86/mm/encrypt: Rewrite sme_populate_pgd() and
    sme_populate_pgd_large()
  x86/mm/encrypt: Rewrite sme_pgtable_calc()

 arch/x86/mm/Makefile               |  13 +--
 arch/x86/mm/mem_encrypt.c          | 169 ++++---------------------------------
 arch/x86/mm/mem_encrypt_identity.c | 123 +++++++++++++++++++++++++++
 arch/x86/mm/mm_internal.h          |   4 +
 4 files changed, 150 insertions(+), 159 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_identity.c

-- 
2.15.0

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/3] x86/mm/encrypt: Move sme_populate_pgd*() into separate translation unit
  2017-12-12 11:45 ` Kirill A. Shutemov
@ 2017-12-12 11:45   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2017-12-12 11:45 UTC (permalink / raw)
  To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, Borislav Petkov, Brijesh Singh, linux-mm, linux-kernel,
	Kirill A. Shutemov

sme_populate_pgd() and sme_populate_pgd_large() operate on the identity
mapping, which means they want virtual addresses to be equal to physical
ones, without the PAGE_OFFSET shift.

We also need to avoid paravirtualization calls there.

Getting this done is tricky. We cannot use the usual page table helpers,
which forces us to open-code a lot of things and makes the code ugly and
hard to modify.

We can get it to work with the page table helpers, but it requires a few
preprocessor tricks. These tricks may have side effects for the rest of
the file.

Let's isolate sme_populate_pgd() and sme_populate_pgd_large() into their
own translation unit.

It's mostly copy-and-paste. The only change in logic is proper
pgtable_area propagation.
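
For illustration only (this is not part of the patch), a hypothetical
caller threads the scratch-area cursor through the helpers like this:

	/*
	 * Hypothetical sketch: map one 4K page and one 2M page.  Each
	 * helper consumes pages from 'area' for any page tables it had
	 * to allocate and returns the advanced cursor, so the caller
	 * always knows where the free scratch space begins.
	 */
	static void __init *example_populate(pgd_t *pgd, void *area,
			unsigned long vaddr, unsigned long paddr,
			pteval_t pte_flags, pmdval_t pmd_flags)
	{
		area = sme_populate_pgd(pgd, area, vaddr, paddr, pte_flags);
		area = sme_populate_pgd_large(pgd, area, vaddr + PMD_PAGE_SIZE,
					      paddr + PMD_PAGE_SIZE, pmd_flags);
		return area;
	}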

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/Makefile               |  13 ++--
 arch/x86/mm/mem_encrypt.c          | 127 +--------------------------------
 arch/x86/mm/mem_encrypt_identity.c | 140 +++++++++++++++++++++++++++++++++++++
 arch/x86/mm/mm_internal.h          |   4 ++
 4 files changed, 155 insertions(+), 129 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_identity.c

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 1b7fee6dafc4..9db870909b3d 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -1,12 +1,14 @@
 # SPDX-License-Identifier: GPL-2.0
-# Kernel does not boot with instrumentation of tlb.c and mem_encrypt.c
-KCOV_INSTRUMENT_tlb.o		:= n
-KCOV_INSTRUMENT_mem_encrypt.o	:= n
+# Kernel does not boot with instrumentation of tlb.c and mem_encrypt*.c
+KCOV_INSTRUMENT_tlb.o			:= n
+KCOV_INSTRUMENT_mem_encrypt.o		:= n
+KCOV_INSTRUMENT_mem_encrypt_identity.o	:= n
 
-KASAN_SANITIZE_mem_encrypt.o	:= n
+KASAN_SANITIZE_mem_encrypt.o		:= n
+KASAN_SANITIZE_mem_encrypt_identity.o	:= n
 
 ifdef CONFIG_FUNCTION_TRACER
-CFLAGS_REMOVE_mem_encrypt.o	= -pg
+CFLAGS_REMOVE_mem_encrypt_identity.o	= -pg
 endif
 
 obj-y	:=  init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
@@ -47,4 +49,5 @@ obj-$(CONFIG_RANDOMIZE_MEMORY) 			+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)	+= pti.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 60df2475ad46..f1f0a3fa7489 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -483,11 +483,6 @@ static void __init sme_clear_pgd(pgd_t *pgd_base, unsigned long start,
 	memset(pgd_p, 0, pgd_size);
 }
 
-#define PGD_FLAGS	_KERNPG_TABLE_NOENC
-#define P4D_FLAGS	_KERNPG_TABLE_NOENC
-#define PUD_FLAGS	_KERNPG_TABLE_NOENC
-#define PMD_FLAGS	_KERNPG_TABLE_NOENC
-
 #define PMD_FLAGS_LARGE		(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL)
 
 #define PMD_FLAGS_DEC		PMD_FLAGS_LARGE
@@ -502,122 +497,6 @@ static void __init sme_clear_pgd(pgd_t *pgd_base, unsigned long start,
 				 (_PAGE_PAT | _PAGE_PWT))
 #define PTE_FLAGS_ENC		(PTE_FLAGS | _PAGE_ENC)
 
-static pmd_t __init *sme_prepare_pgd(pgd_t *pgd_base, unsigned long vaddr)
-{
-	pgd_t *pgd_p;
-	p4d_t *p4d_p;
-	pud_t *pud_p;
-	pmd_t *pmd_p;
-
-	pgd_p = pgd_base + pgd_index(vaddr);
-	if (native_pgd_val(*pgd_p)) {
-		if (IS_ENABLED(CONFIG_X86_5LEVEL))
-			p4d_p = (p4d_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
-		else
-			pud_p = (pud_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pgd_t pgd;
-
-		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-			p4d_p = pgtable_area;
-			memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
-			pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
-
-			pgd = native_make_pgd((pgdval_t)p4d_p + PGD_FLAGS);
-		} else {
-			pud_p = pgtable_area;
-			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
-
-			pgd = native_make_pgd((pgdval_t)pud_p + PGD_FLAGS);
-		}
-		native_set_pgd(pgd_p, pgd);
-	}
-
-	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-		p4d_p += p4d_index(vaddr);
-		if (native_p4d_val(*p4d_p)) {
-			pud_p = (pud_t *)(native_p4d_val(*p4d_p) & ~PTE_FLAGS_MASK);
-		} else {
-			p4d_t p4d;
-
-			pud_p = pgtable_area;
-			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
-
-			p4d = native_make_p4d((pudval_t)pud_p + P4D_FLAGS);
-			native_set_p4d(p4d_p, p4d);
-		}
-	}
-
-	pud_p += pud_index(vaddr);
-	if (native_pud_val(*pud_p)) {
-		if (native_pud_val(*pud_p) & _PAGE_PSE)
-			return NULL;
-
-		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pud_t pud;
-
-		pmd_p = pgtable_area;
-		memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD);
-		pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
-
-		pud = native_make_pud((pmdval_t)pmd_p + PUD_FLAGS);
-		native_set_pud(pud_p, pud);
-	}
-
-	return pmd_p;
-}
-
-static void __init sme_populate_pgd_large(pgd_t *pgd, unsigned long vaddr,
-					  unsigned long paddr,
-					  pmdval_t pmd_flags)
-{
-	pmd_t *pmd_p;
-
-	pmd_p = sme_prepare_pgd(pgd, vaddr);
-	if (!pmd_p)
-		return;
-
-	pmd_p += pmd_index(vaddr);
-	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
-		native_set_pmd(pmd_p, native_make_pmd(paddr | pmd_flags));
-}
-
-static void __init sme_populate_pgd(pgd_t *pgd, unsigned long vaddr,
-				    unsigned long paddr,
-				    pteval_t pte_flags)
-{
-	pmd_t *pmd_p;
-	pte_t *pte_p;
-
-	pmd_p = sme_prepare_pgd(pgd, vaddr);
-	if (!pmd_p)
-		return;
-
-	pmd_p += pmd_index(vaddr);
-	if (native_pmd_val(*pmd_p)) {
-		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
-			return;
-
-		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pmd_t pmd;
-
-		pte_p = pgtable_area;
-		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
-		pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
-
-		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
-		native_set_pmd(pmd_p, pmd);
-	}
-
-	pte_p += pte_index(vaddr);
-	if (!native_pte_val(*pte_p))
-		native_set_pte(pte_p, native_make_pte(paddr | pte_flags));
-}
-
 static void __init __sme_map_range(pgd_t *pgd, unsigned long vaddr,
 				   unsigned long vaddr_end,
 				   unsigned long paddr,
@@ -628,7 +507,7 @@ static void __init __sme_map_range(pgd_t *pgd, unsigned long vaddr,
 		unsigned long pmd_start = ALIGN(vaddr, PMD_PAGE_SIZE);
 
 		while (vaddr < pmd_start) {
-			sme_populate_pgd(pgd, vaddr, paddr, pte_flags);
+			pgtable_area = sme_populate_pgd(pgd, pgtable_area, vaddr, paddr, pte_flags);
 
 			vaddr += PAGE_SIZE;
 			paddr += PAGE_SIZE;
@@ -636,7 +515,7 @@ static void __init __sme_map_range(pgd_t *pgd, unsigned long vaddr,
 	}
 
 	while (vaddr < (vaddr_end & PMD_PAGE_MASK)) {
-		sme_populate_pgd_large(pgd, vaddr, paddr, pmd_flags);
+		pgtable_area = sme_populate_pgd_large(pgd, pgtable_area, vaddr, paddr, pmd_flags);
 
 		vaddr += PMD_PAGE_SIZE;
 		paddr += PMD_PAGE_SIZE;
@@ -645,7 +524,7 @@ static void __init __sme_map_range(pgd_t *pgd, unsigned long vaddr,
 	if (vaddr_end & ~PMD_PAGE_MASK) {
 		/* End is not 2MB aligned, create PTE entries */
 		while (vaddr < vaddr_end) {
-			sme_populate_pgd(pgd, vaddr, paddr, pte_flags);
+			pgtable_area = sme_populate_pgd(pgd, pgtable_area, vaddr, paddr, pte_flags);
 
 			vaddr += PAGE_SIZE;
 			paddr += PAGE_SIZE;
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
new file mode 100644
index 000000000000..8788b268a85d
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -0,0 +1,140 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+
+#define PGD_FLAGS	_KERNPG_TABLE_NOENC
+#define P4D_FLAGS	_KERNPG_TABLE_NOENC
+#define PUD_FLAGS	_KERNPG_TABLE_NOENC
+#define PMD_FLAGS	_KERNPG_TABLE_NOENC
+
+static pmd_t __init *sme_prepare_pgd(pgd_t *pgd_base, void **pgtable_area,
+		unsigned long vaddr)
+{
+	pgd_t *pgd_p;
+	p4d_t *p4d_p;
+	pud_t *pud_p;
+	pmd_t *pmd_p;
+
+	pgd_p = pgd_base + pgd_index(vaddr);
+	if (native_pgd_val(*pgd_p)) {
+		if (IS_ENABLED(CONFIG_X86_5LEVEL))
+			p4d_p = (p4d_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
+		else
+			pud_p = (pud_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pgd_t pgd;
+
+		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+			p4d_p = *pgtable_area;
+			memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
+			*pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
+
+			pgd = native_make_pgd((pgdval_t)p4d_p + PGD_FLAGS);
+		} else {
+			pud_p = *pgtable_area;
+			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
+			*pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+
+			pgd = native_make_pgd((pgdval_t)pud_p + PGD_FLAGS);
+		}
+		native_set_pgd(pgd_p, pgd);
+	}
+
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d_p += p4d_index(vaddr);
+		if (native_p4d_val(*p4d_p)) {
+			pud_p = (pud_t *)(native_p4d_val(*p4d_p) & ~PTE_FLAGS_MASK);
+		} else {
+			p4d_t p4d;
+
+			pud_p = *pgtable_area;
+			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
+			*pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+
+			p4d = native_make_p4d((pudval_t)pud_p + P4D_FLAGS);
+			native_set_p4d(p4d_p, p4d);
+		}
+	}
+
+	pud_p += pud_index(vaddr);
+	if (native_pud_val(*pud_p)) {
+		if (native_pud_val(*pud_p) & _PAGE_PSE)
+			return NULL;
+
+		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pud_t pud;
+
+		pmd_p = *pgtable_area;
+		memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD);
+		*pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
+
+		pud = native_make_pud((pmdval_t)pmd_p + PUD_FLAGS);
+		native_set_pud(pud_p, pud);
+	}
+
+	return pmd_p;
+}
+
+void __init *sme_populate_pgd_large(pgd_t *pgd, void *pgtable_area,
+		unsigned long vaddr, unsigned long paddr, pmdval_t pmd_flags)
+{
+	pmd_t *pmd_p;
+
+	pmd_p = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
+	if (!pmd_p)
+		return pgtable_area;
+
+	pmd_p += pmd_index(vaddr);
+	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
+		native_set_pmd(pmd_p, native_make_pmd(paddr | pmd_flags));
+
+	return pgtable_area;
+}
+
+void __init *sme_populate_pgd(pgd_t *pgd, void *pgtable_area,
+		unsigned long vaddr, unsigned long paddr, pteval_t pte_flags)
+{
+	pmd_t *pmd_p;
+	pte_t *pte_p;
+
+	pmd_p = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
+	if (!pmd_p)
+		return pgtable_area;
+
+	pmd_p += pmd_index(vaddr);
+	if (native_pmd_val(*pmd_p)) {
+		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
+			return pgtable_area;
+
+		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pmd_t pmd;
+
+		pte_p = pgtable_area;
+		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
+		pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
+
+		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
+		native_set_pmd(pmd_p, pmd);
+	}
+
+	pte_p += pte_index(vaddr);
+	if (!native_pte_val(*pte_p))
+		native_set_pte(pte_p, native_make_pte(paddr | pte_flags));
+
+	return pgtable_area;
+}
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 4e1f6e1b8159..309df9a2b4c7 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -19,4 +19,8 @@ extern int after_bootmem;
 
 void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache);
 
+void __init *sme_populate_pgd(pgd_t *pgd, void *pgtable_area,
+		unsigned long vaddr, unsigned long paddr, pteval_t pte_flags);
+void __init *sme_populate_pgd_large(pgd_t *pgd, void *pgtable_area,
+		unsigned long vaddr, unsigned long paddr, pmdval_t pmd_flags);
 #endif	/* __X86_MM_INTERNAL_H */
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/3] x86/mm/encrypt: Rewrite sme_populate_pgd() and sme_populate_pgd_large()
  2017-12-12 11:45 ` Kirill A. Shutemov
@ 2017-12-12 11:45   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2017-12-12 11:45 UTC (permalink / raw)
  To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, Borislav Petkov, Brijesh Singh, linux-mm, linux-kernel,
	Kirill A. Shutemov

sme_populate_pgd() and sme_populate_pgd_large() operate on the identity
mapping, which means they want virtual addresses to be equal to physical
ones, without the PAGE_OFFSET shift.

We also need to avoid paravirtualization calls there.

Getting this done is tricky. We cannot use the usual page table helpers,
which forces us to open-code a lot of things and makes the code ugly and
hard to modify.

We can get it to work with the page table helpers, but it requires a few
preprocessor tricks:

  - Define __pa() and __va() to be compatible with the identity mapping.

  - Undef CONFIG_PARAVIRT and CONFIG_PARAVIRT_SPINLOCKS before including
    any file. This way we avoid paravirtualization calls.

Now we can use the normal page table helpers just fine.
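
To see why both tricks are needed: with CONFIG_PARAVIRT defined,
set_pgd()/set_pud()/etc. go through pv_mmu_ops indirections, and the
*_offset() helpers convert the physical address stored in an entry back
into a pointer via __va().  Roughly (simplified, not the exact kernel
code):

	/* Sketch of what the generic helper boils down to: */
	static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
	{
		/* pgd_page_vaddr() is essentially __va(pgd_val(*pgd) & PTE_PFN_MASK) */
		return (p4d_t *)pgd_page_vaddr(*pgd) + p4d_index(address);
	}

With __pa()/__va() defined as identity and CONFIG_PARAVIRT undefined,
these helpers work on identity-mapped addresses with plain pointer
stores, which is exactly what the early SME code needs.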

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/mem_encrypt_identity.c | 157 +++++++++++++++++--------------------
 1 file changed, 70 insertions(+), 87 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 8788b268a85d..35b2a8e4f8db 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -12,6 +12,23 @@
 
 #define DISABLE_BRANCH_PROFILING
 
+/*
+ * Since we're dealing with identity mappings, physical and virtual
+ * addresses are the same, so override these defines which are ultimately
+ * used by the headers in misc.h.
+ */
+#define __pa(x)  ((unsigned long)(x))
+#define __va(x)  ((void *)((unsigned long)(x)))
+
+/*
+ * Special hack: we have to be careful, because no indirections are
+ * allowed here, and paravirt_ops is a kind of one. As it will only run in
+ * baremetal anyway, we just keep it from happening. (This list needs to
+ * be extended when new paravirt and debugging variants are added.)
+ */
+#undef CONFIG_PARAVIRT
+#undef CONFIG_PARAVIRT_SPINLOCKS
+
 #include <linux/kernel.h>
 #include <linux/mm.h>
 
@@ -20,121 +37,87 @@
 #define PUD_FLAGS	_KERNPG_TABLE_NOENC
 #define PMD_FLAGS	_KERNPG_TABLE_NOENC
 
-static pmd_t __init *sme_prepare_pgd(pgd_t *pgd_base, void **pgtable_area,
+static pud_t __init *sme_prepare_pgd(pgd_t *pgd_base, void **pgtable_area,
 		unsigned long vaddr)
 {
-	pgd_t *pgd_p;
-	p4d_t *p4d_p;
-	pud_t *pud_p;
-	pmd_t *pmd_p;
-
-	pgd_p = pgd_base + pgd_index(vaddr);
-	if (native_pgd_val(*pgd_p)) {
-		if (IS_ENABLED(CONFIG_X86_5LEVEL))
-			p4d_p = (p4d_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
-		else
-			pud_p = (pud_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pgd_t pgd;
-
-		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-			p4d_p = *pgtable_area;
-			memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
-			*pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
-
-			pgd = native_make_pgd((pgdval_t)p4d_p + PGD_FLAGS);
-		} else {
-			pud_p = *pgtable_area;
-			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			*pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
-
-			pgd = native_make_pgd((pgdval_t)pud_p + PGD_FLAGS);
-		}
-		native_set_pgd(pgd_p, pgd);
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_base + pgd_index(vaddr);
+	if (pgd_none(*pgd)) {
+		p4d = *pgtable_area;
+		memset(p4d, 0, sizeof(*p4d) * PTRS_PER_P4D);
+		*pgtable_area += sizeof(*p4d) * PTRS_PER_P4D;
+		set_pgd(pgd, __pgd(PGD_FLAGS | __pa(p4d)));
 	}
 
-	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-		p4d_p += p4d_index(vaddr);
-		if (native_p4d_val(*p4d_p)) {
-			pud_p = (pud_t *)(native_p4d_val(*p4d_p) & ~PTE_FLAGS_MASK);
-		} else {
-			p4d_t p4d;
-
-			pud_p = *pgtable_area;
-			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-			*pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
-
-			p4d = native_make_p4d((pudval_t)pud_p + P4D_FLAGS);
-			native_set_p4d(p4d_p, p4d);
-		}
+	p4d = p4d_offset(pgd, vaddr);
+	if (p4d_none(*p4d)) {
+		pud = *pgtable_area;
+		memset(pud, 0, sizeof(*pud) * PTRS_PER_PUD);
+		*pgtable_area += sizeof(*pud) * PTRS_PER_PUD;
+		set_p4d(p4d, __p4d(P4D_FLAGS | __pa(pud)));
 	}
 
-	pud_p += pud_index(vaddr);
-	if (native_pud_val(*pud_p)) {
-		if (native_pud_val(*pud_p) & _PAGE_PSE)
-			return NULL;
-
-		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pud_t pud;
-
-		pmd_p = *pgtable_area;
-		memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD);
-		*pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
-
-		pud = native_make_pud((pmdval_t)pmd_p + PUD_FLAGS);
-		native_set_pud(pud_p, pud);
+	pud = pud_offset(p4d, vaddr);
+	if (pud_none(*pud)) {
+		pmd = *pgtable_area;
+		memset(pmd, 0, sizeof(*pmd) * PTRS_PER_PMD);
+		*pgtable_area += sizeof(*pmd) * PTRS_PER_PMD;
+		set_pud(pud, __pud(PUD_FLAGS | __pa(pmd)));
 	}
 
-	return pmd_p;
+	if (pud_large(*pud))
+		return NULL;
+
+	return pud;
 }
 
 void __init *sme_populate_pgd_large(pgd_t *pgd, void *pgtable_area,
 		unsigned long vaddr, unsigned long paddr, pmdval_t pmd_flags)
 {
-	pmd_t *pmd_p;
+	pud_t *pud;
+	pmd_t *pmd;
 
-	pmd_p = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
-	if (!pmd_p)
+	pud = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
+	if (!pud)
 		return pgtable_area;
 
-	pmd_p += pmd_index(vaddr);
-	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
-		native_set_pmd(pmd_p, native_make_pmd(paddr | pmd_flags));
+	pmd = pmd_offset(pud, vaddr);
+	if (pmd_large(*pmd))
+		return pgtable_area;
 
+	set_pmd(pmd, __pmd(paddr | pmd_flags));
 	return pgtable_area;
 }
 
 void __init *sme_populate_pgd(pgd_t *pgd, void *pgtable_area,
 		unsigned long vaddr, unsigned long paddr, pteval_t pte_flags)
 {
-	pmd_t *pmd_p;
-	pte_t *pte_p;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
 
-	pmd_p = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
-	if (!pmd_p)
+	pud = sme_prepare_pgd(pgd, &pgtable_area, vaddr);
+	if (!pud)
 		return pgtable_area;
 
-	pmd_p += pmd_index(vaddr);
-	if (native_pmd_val(*pmd_p)) {
-		if (native_pmd_val(*pmd_p) & _PAGE_PSE)
-			return pgtable_area;
-
-		pte_p = (pte_t *)(native_pmd_val(*pmd_p) & ~PTE_FLAGS_MASK);
-	} else {
-		pmd_t pmd;
-
-		pte_p = pgtable_area;
-		memset(pte_p, 0, sizeof(*pte_p) * PTRS_PER_PTE);
-		pgtable_area += sizeof(*pte_p) * PTRS_PER_PTE;
-
-		pmd = native_make_pmd((pteval_t)pte_p + PMD_FLAGS);
-		native_set_pmd(pmd_p, pmd);
+	pmd = pmd_offset(pud, vaddr);
+	if (pmd_none(*pmd)) {
+		pte = pgtable_area;
+		memset(pte, 0, sizeof(pte) * PTRS_PER_PTE);
+		pgtable_area += sizeof(pte) * PTRS_PER_PTE;
+		set_pmd(pmd, __pmd(PMD_FLAGS | __pa(pte)));
 	}
 
-	pte_p += pte_index(vaddr);
-	if (!native_pte_val(*pte_p))
-		native_set_pte(pte_p, native_make_pte(paddr | pte_flags));
+	if (pmd_large(*pmd))
+		return pgtable_area;
+
+	pte = pte_offset_map(pmd, vaddr);
+	if (pte_none(*pte))
+		set_pte(pte, __pte(paddr | pte_flags));
 
 	return pgtable_area;
 }
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/3] x86/mm/encrypt: Rewrite sme_pgtable_calc()
  2017-12-12 11:45 ` Kirill A. Shutemov
@ 2017-12-12 11:45   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2017-12-12 11:45 UTC (permalink / raw)
  To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, Borislav Petkov, Brijesh Singh, linux-mm, linux-kernel,
	Kirill A. Shutemov

sme_pgtable_calc() is unnecessarily complex. It can be rewritten in a
more streamlined way.

As a side effect, the code becomes ready for boot-time switching between
paging modes.
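
As a quick sanity check of the reworked formula (my own arithmetic,
assuming 4-level paging, so the p4d term drops out and P4D_SIZE equals
PGDIR_SIZE, with each table page being 4K): for len = 16M,

	entries	= (DIV_ROUND_UP(16M, 512G) + 1) * 4K	/* PUD pages */
		+ (DIV_ROUND_UP(16M, 1G)   + 1) * 4K	/* PMD pages */
		+ 2 * 4K				/* PTE pages */
		= 24K;
	tables	= DIV_ROUND_UP(24K, 512G) * 4K		/* PUD page for the new tables */
		+ DIV_ROUND_UP(24K, 1G)   * 4K		/* PMD page for the new tables */
		= 8K;

so sme_pgtable_calc(16M) would reserve 32K of scratch space, which, as
far as I can tell, matches what the old open-coded version returns.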

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/mem_encrypt.c | 42 ++++++++++++------------------------------
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index f1f0a3fa7489..fe7fc1c6eaf7 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -561,8 +561,7 @@ static void __init sme_map_range_decrypted_wp(pgd_t *pgd,
 
 static unsigned long __init sme_pgtable_calc(unsigned long len)
 {
-	unsigned long p4d_size, pud_size, pmd_size, pte_size;
-	unsigned long total;
+	unsigned long entries, tables;
 
 	/*
 	 * Perform a relatively simplistic calculation of the pagetable
@@ -572,42 +571,25 @@ static unsigned long __init sme_pgtable_calc(unsigned long len)
 	 * mappings. Incrementing the count for each covers the case where
 	 * the addresses cross entries.
 	 */
-	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-		p4d_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
-		p4d_size *= sizeof(p4d_t) * PTRS_PER_P4D;
-		pud_size = (ALIGN(len, P4D_SIZE) / P4D_SIZE) + 1;
-		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
-	} else {
-		p4d_size = 0;
-		pud_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
-		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
-	}
-	pmd_size = (ALIGN(len, PUD_SIZE) / PUD_SIZE) + 1;
-	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
-	pte_size = 2 * sizeof(pte_t) * PTRS_PER_PTE;
 
-	total = p4d_size + pud_size + pmd_size + pte_size;
+	/* PGDIR_SIZE is equal to P4D_SIZE on 4-level machine. */
+	if (PTRS_PER_P4D > 1)
+		entries = (DIV_ROUND_UP(len, PGDIR_SIZE) + 1) * sizeof(p4d_t) * PTRS_PER_P4D;
+	entries += (DIV_ROUND_UP(len, P4D_SIZE) + 1) * sizeof(pud_t) * PTRS_PER_PUD;
+	entries += (DIV_ROUND_UP(len, PUD_SIZE) + 1) * sizeof(pmd_t) * PTRS_PER_PMD;
+	entries += 2 * sizeof(pte_t) * PTRS_PER_PTE;
 
 	/*
 	 * Now calculate the added pagetable structures needed to populate
 	 * the new pagetables.
 	 */
-	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-		p4d_size = ALIGN(total, PGDIR_SIZE) / PGDIR_SIZE;
-		p4d_size *= sizeof(p4d_t) * PTRS_PER_P4D;
-		pud_size = ALIGN(total, P4D_SIZE) / P4D_SIZE;
-		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
-	} else {
-		p4d_size = 0;
-		pud_size = ALIGN(total, PGDIR_SIZE) / PGDIR_SIZE;
-		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
-	}
-	pmd_size = ALIGN(total, PUD_SIZE) / PUD_SIZE;
-	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
 
-	total += p4d_size + pud_size + pmd_size;
+	if (PTRS_PER_P4D > 1)
+		tables = DIV_ROUND_UP(entries, PGDIR_SIZE) * sizeof(p4d_t) * PTRS_PER_P4D;
+	tables += DIV_ROUND_UP(entries, P4D_SIZE) * sizeof(pud_t) * PTRS_PER_PUD;
+	tables += DIV_ROUND_UP(entries, PUD_SIZE) * sizeof(pmd_t) * PTRS_PER_PMD;
 
-	return total;
+	return entries + tables;
 }
 
 void __init sme_encrypt_kernel(struct boot_params *bp)
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/3] x86/mm/encrypt: Simplify pgtable helpers
  2017-12-12 11:45 ` Kirill A. Shutemov
@ 2017-12-18 10:16   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2017-12-18 10:16 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	x86, Borislav Petkov, Brijesh Singh, linux-mm, linux-kernel

On Tue, Dec 12, 2017 at 02:45:41PM +0300, Kirill A. Shutemov wrote:
> This patchset simplifies sme_populate_pgd(), sme_populate_pgd_large() and
> sme_pgtable_calc() functions.
> 
> As a side effect, the patchset makes the encryption code ready for
> boot-time switching between paging modes.
> 
> The patchset is built on top of Tom's "x86: SME: BSP/SME microcode update
> fix" patchset.
> 
> It was only build-tested. Tom, could you please get it tested properly?

Tom, do you have time to take a look?

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 10+ messages in thread
