* [PATCH v2 0/2] x86: Secure Memory Encryption (SME) fixes 2017-07-26
From: Tom Lendacky @ 2017-07-28 16:01 UTC
  To: x86, linux-kernel
  Cc: Ingo Molnar, Borislav Petkov, Andy Lutomirski, H. Peter Anvin,
	Thomas Gleixner, Dave Young, Brijesh Singh, kexec

This patch series addresses some issues found during further testing of
Secure Memory Encryption (SME).

The following fixes are included in this update series:

- Fix cache-related memory corruption that occurs when kexec is invoked
  successively
- Remove the encryption mask from the protection properties returned
  by arch_apei_get_mem_attribute() when SME is active

---

This patch series is based on the master branch of the tip tree:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master

  Commit 8333bcad393c ("Merge branch 'x86/asm'")

Cc: <kexec@lists.infradead.org>

Changes since v1:
- Patch #1:
  - Only issue wbinvd if SME is active
- Patch #2:
  - Create a 'no encryption' version of the PAGE_KERNEL protection type
    and use it in arch_apei_get_mem_attribute()
- General comment and patch description cleanup

Tom Lendacky (2):
  x86/mm, kexec: Fix memory corruption with SME on successive kexecs
  acpi, x86: Remove encryption mask from ACPI page protection type

 arch/x86/include/asm/acpi.h          | 11 ++++++-----
 arch/x86/include/asm/kexec.h         |  3 ++-
 arch/x86/include/asm/pgtable_types.h |  1 +
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/relocate_kernel_64.S | 14 ++++++++++++++
 5 files changed, 25 insertions(+), 7 deletions(-)

-- 
1.9.1

* [PATCH v2 1/2] x86/mm, kexec: Fix memory corruption with SME on successive kexecs
From: Tom Lendacky @ 2017-07-28 16:01 UTC
  To: x86, linux-kernel
  Cc: Ingo Molnar, Borislav Petkov, Andy Lutomirski, H. Peter Anvin,
	Thomas Gleixner, Dave Young, Brijesh Singh, kexec

After issuing successive kexecs it was found that the SHA hash failed
verification when booting the kexec'd kernel. When SME is enabled, the
change from using pages that were marked encrypted to pages that are now
marked unencrypted (through new identity-mapped page tables) results in
memory corruption if any cache entries remain for the previously encrypted
pages. This is because separate cache entries can exist for the same
physical location, tagged both with and without the encryption bit.

To prevent this, issue a wbinvd if SME is active before copying the pages
from the source location to the destination location, to clear any possible
cache entry conflicts.

Cc: <kexec@lists.infradead.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kexec.h         |  3 ++-
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/relocate_kernel_64.S | 14 ++++++++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index e8183ac..942c1f4 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -147,7 +147,8 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
 relocate_kernel(unsigned long indirection_page,
 		unsigned long page_list,
 		unsigned long start_address,
-		unsigned int preserve_context);
+		unsigned int preserve_context,
+		unsigned int sme_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 9cf8daa..1f790cf 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -335,7 +335,8 @@ void machine_kexec(struct kimage *image)
 	image->start = relocate_kernel((unsigned long)image->head,
 				       (unsigned long)page_list,
 				       image->start,
-				       image->preserve_context);
+				       image->preserve_context,
+				       sme_active());
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 98111b3..307d3ba 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,6 +47,7 @@ relocate_kernel:
 	 * %rsi page_list
 	 * %rdx start address
 	 * %rcx preserve_context
+	 * %r8  sme_active
 	 */
 
 	/* Save the CPU context, used for jumping back */
@@ -71,6 +72,9 @@ relocate_kernel:
 	pushq $0
 	popfq
 
+	/* Save SME active flag */
+	movq	%r8, %r12
+
 	/*
 	 * get physical address of control page now
 	 * this is impossible after page table switch
@@ -132,6 +136,16 @@ identity_mapped:
 	/* Flush the TLB (needed?) */
 	movq	%r9, %cr3
 
+	/*
+	 * If SME is active, there could be old encrypted cache line
+	 * entries that will conflict with the now unencrypted memory
+	 * used by kexec. Flush the caches before copying the kernel.
+	 */
+	testq	%r12, %r12
+	jz 1f
+	wbinvd
+1:
+
 	movq	%rcx, %r11
 	call	swap_pages
 
-- 
1.9.1
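
To make the hazard concrete, here is a minimal C-level sketch of the flow
that the new code in relocate_kernel_64.S implements. The helper
copy_kexec_pages() is hypothetical and exists only for illustration;
native_wbinvd() and memcpy() are the real kernel interfaces it leans on.

#include <linux/types.h>
#include <linux/string.h>
#include <asm/special_insns.h>	/* native_wbinvd() */

/*
 * Hypothetical sketch, not kernel code: what the assembly does around
 * the page copy when SME may be active.
 */
static void copy_kexec_pages(void *dst, const void *src, size_t len,
			     bool sme_active)
{
	/*
	 * With SME, cache lines for these physical pages may still be
	 * tagged with the encryption bit from their earlier, encrypted
	 * use. The new identity-mapped page tables are unencrypted, so
	 * flush the caches first; otherwise stale encrypted lines can
	 * corrupt the copied image (seen as the SHA verification
	 * failure described above).
	 */
	if (sme_active)
		native_wbinvd();

	memcpy(dst, src, len);	/* stand-in for the swap_pages loop */
}

In the actual patch the flag travels from machine_kexec() as the new
sme_active parameter, is saved from %r8 into %r12, and gates a wbinvd
after the switch to the identity-mapped page tables, where C helpers are
no longer usable.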


* [PATCH v2 2/2] acpi, x86: Remove encryption mask from ACPI page protection type
From: Tom Lendacky @ 2017-07-28 16:01 UTC
  To: x86, linux-kernel
  Cc: Ingo Molnar, Borislav Petkov, Andy Lutomirski, H. Peter Anvin,
	Thomas Gleixner, Dave Young, Brijesh Singh

The function arch_apei_get_mem_attribute() is used to set the page
protection type for ACPI physical addresses. When SME is active, the
associated protection type cannot have the encryption mask set, since the
ACPI tables live in unencrypted memory. Create a new protection type,
PAGE_KERNEL_NOENC, that is a 'no encryption' version of PAGE_KERNEL, and
return it from arch_apei_get_mem_attribute().

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/acpi.h          | 11 ++++++-----
 arch/x86/include/asm/pgtable_types.h |  1 +
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 562286f..543a3f0 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -160,12 +160,13 @@ static inline pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
 	 * you call efi_mem_attributes() during boot and at runtime,
 	 * you could theoretically see different attributes.
 	 *
-	 * Since we are yet to see any x86 platforms that require
-	 * anything other than PAGE_KERNEL (some arm64 platforms
-	 * require the equivalent of PAGE_KERNEL_NOCACHE), return that
-	 * until we know differently.
+	 * We are yet to see any x86 platforms that require anything
+	 * other than PAGE_KERNEL (some arm64 platforms require the
+	 * equivalent of PAGE_KERNEL_NOCACHE). Additionally, if SME
+	 * is active, the ACPI information will not be encrypted,
+	 * so return PAGE_KERNEL_NOENC until we know differently.
 	 */
-	 return PAGE_KERNEL;
+	return PAGE_KERNEL_NOENC;
 }
 #endif
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 6c55973..399261c 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -211,6 +211,7 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_NOENC_WP	(__PAGE_KERNEL_WP)
 
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_NOENC	__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)
-- 
1.9.1
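
For context, a hedged sketch of the kind of mapping path that consumes the
returned protection. The function apei_map_example() is an illustrative
name rather than the in-tree GHES code, although arch_apei_get_mem_attribute()
and ioremap_page_range() are real interfaces.

#include <linux/mm.h>
#include <linux/io.h>
#include <asm/acpi.h>

/*
 * Illustrative only: map one page of ACPI/APEI memory with the
 * architecture-selected protection. With SME active this now picks up
 * PAGE_KERNEL_NOENC, so the mapping carries no encryption bit and the
 * unencrypted ACPI tables read back correctly.
 */
static void __iomem *apei_map_example(unsigned long vaddr, phys_addr_t paddr)
{
	pgprot_t prot = arch_apei_get_mem_attribute(paddr);

	if (ioremap_page_range(vaddr, vaddr + PAGE_SIZE, paddr, prot))
		return NULL;

	return (void __iomem *)vaddr;
}

Had the mapping been created with PAGE_KERNEL instead, _PAGE_ENC would be
set and reads through it would decrypt memory that was never encrypted,
which is the corruption the patch avoids.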


* [tip:x86/mm] x86/mm, kexec: Fix memory corruption with SME on successive kexecs
From: tip-bot for Tom Lendacky @ 2017-07-30 10:35 UTC
  To: linux-tip-commits
  Cc: peterz, torvalds, mingo, kexec, bp, brijesh.singh, luto, hpa,
	linux-kernel, dyoung, tglx, thomas.lendacky

Commit-ID:  4e237903f95db585b976e7311de2bfdaaf0f6e31
Gitweb:     http://git.kernel.org/tip/4e237903f95db585b976e7311de2bfdaaf0f6e31
Author:     Tom Lendacky <thomas.lendacky@amd.com>
AuthorDate: Fri, 28 Jul 2017 11:01:16 -0500
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 30 Jul 2017 12:09:12 +0200

x86/mm, kexec: Fix memory corruption with SME on successive kexecs

After issuing successive kexecs it was found that the SHA hash failed
verification when booting the kexec'd kernel. When SME is enabled, the
change from using pages that were marked encrypted to pages that are now
marked unencrypted (through new identity-mapped page tables) results in
memory corruption if any cache entries remain for the previously encrypted
pages. This is because separate cache entries can exist for the same
physical location, tagged both with and without the encryption bit.

To prevent this, issue a wbinvd if SME is active before copying the pages
from the source location to the destination location, to clear any possible
cache entry conflicts.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <kexec@lists.infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e7fb8610af3a93e8f8ae6f214cd9249adc0df2b4.1501186516.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/kexec.h         |  3 ++-
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/relocate_kernel_64.S | 14 ++++++++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index e8183ac..942c1f4 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -147,7 +147,8 @@ unsigned long
 relocate_kernel(unsigned long indirection_page,
 		unsigned long page_list,
 		unsigned long start_address,
-		unsigned int preserve_context);
+		unsigned int preserve_context,
+		unsigned int sme_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 9cf8daa..1f790cf 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -335,7 +335,8 @@ void machine_kexec(struct kimage *image)
 	image->start = relocate_kernel((unsigned long)image->head,
 				       (unsigned long)page_list,
 				       image->start,
-				       image->preserve_context);
+				       image->preserve_context,
+				       sme_active());
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 98111b3..307d3ba 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,6 +47,7 @@ relocate_kernel:
 	 * %rsi page_list
 	 * %rdx start address
 	 * %rcx preserve_context
+	 * %r8  sme_active
 	 */
 
 	/* Save the CPU context, used for jumping back */
@@ -71,6 +72,9 @@ relocate_kernel:
 	pushq $0
 	popfq
 
+	/* Save SME active flag */
+	movq	%r8, %r12
+
 	/*
 	 * get physical address of control page now
 	 * this is impossible after page table switch
@@ -132,6 +136,16 @@ identity_mapped:
 	/* Flush the TLB (needed?) */
 	movq	%r9, %cr3
 
+	/*
+	 * If SME is active, there could be old encrypted cache line
+	 * entries that will conflict with the now unencrypted memory
+	 * used by kexec. Flush the caches before copying the kernel.
+	 */
+	testq	%r12, %r12
+	jz 1f
+	wbinvd
+1:
+
 	movq	%rcx, %r11
 	call	swap_pages
 



* [tip:x86/mm] acpi, x86/mm: Remove encryption mask from ACPI page protection type
From: tip-bot for Tom Lendacky @ 2017-07-30 10:35 UTC
  To: linux-tip-commits
  Cc: bp, tglx, mingo, dyoung, linux-kernel, luto, torvalds, hpa,
	peterz, thomas.lendacky, brijesh.singh

Commit-ID:  57bd1905b228f2a14d7506b0302f69f425131e57
Gitweb:     http://git.kernel.org/tip/57bd1905b228f2a14d7506b0302f69f425131e57
Author:     Tom Lendacky <thomas.lendacky@amd.com>
AuthorDate: Fri, 28 Jul 2017 11:01:17 -0500
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 30 Jul 2017 12:09:12 +0200

acpi, x86/mm: Remove encryption mask from ACPI page protection type

The arch_apei_get_mem_attribute() function is used to set the page
protection type for ACPI physical addresses. When SME is active, the
associated protection type cannot have the encryption mask set, since the
ACPI tables live in unencrypted memory - the kernel would otherwise see
corrupted data.

To fix this, create a new protection type, PAGE_KERNEL_NOENC, that is a
'no encryption' version of PAGE_KERNEL, and return it from
arch_apei_get_mem_attribute().

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e1cb9395b2f061cd96f1e59c3cbbe5ff5d4ec26e.1501186516.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/acpi.h          | 11 ++++++-----
 arch/x86/include/asm/pgtable_types.h |  1 +
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 562286f..72d867f 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -160,12 +160,13 @@ static inline pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
 	 * you call efi_mem_attributes() during boot and at runtime,
 	 * you could theoretically see different attributes.
 	 *
-	 * Since we are yet to see any x86 platforms that require
-	 * anything other than PAGE_KERNEL (some arm64 platforms
-	 * require the equivalent of PAGE_KERNEL_NOCACHE), return that
-	 * until we know differently.
+	 * We are yet to see any x86 platforms that require anything
+	 * other than PAGE_KERNEL (some ARM64 platforms require the
+	 * equivalent of PAGE_KERNEL_NOCACHE). Additionally, if SME
+	 * is active, the ACPI information will not be encrypted,
+	 * so return PAGE_KERNEL_NOENC until we know differently.
 	 */
-	 return PAGE_KERNEL;
+	return PAGE_KERNEL_NOENC;
 }
 #endif
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 6c55973..399261c 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -211,6 +211,7 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_NOENC_WP	(__PAGE_KERNEL_WP)
 
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_NOENC	__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)

