* [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
@ 2018-09-30  3:10 Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
                   ` (4 more replies)
  0 siblings, 5 replies; 30+ messages in thread
From: Lianbo Jiang @ 2018-09-30  3:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled on an AMD machine, kdump also needs to be supported.
Because the memory is encrypted in the first kernel, the old memory has to be
remapped into the kdump kernel for dumping data, and SME must also be enabled
in the kdump kernel; otherwise the old memory cannot be decrypted.

For kdump, it is necessary to distinguish whether the memory is encrypted,
and furthermore to know which part of the memory is encrypted and which part
is not. The memory is then remapped appropriately so that the CPU knows how
to access it.

As we know, a page of memory that is marked as encrypted is automatically
decrypted when read from DRAM and automatically encrypted when written to
DRAM. Therefore, if the old memory is encrypted, it has to be remapped with
the memory encryption mask so that it is automatically decrypted when read
from DRAM.
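
To illustrate, reading one page of the encrypted old memory via the
ioremap_encrypted() helper added in patch 1/4 looks roughly like the sketch
below (simplified, error handling trimmed; the function name
read_old_encrypted_page() is made up for the example):

static ssize_t read_old_encrypted_page(unsigned long pfn, char *buf)
{
	void *vaddr;

	/* Map with the encryption mask so reads are decrypted by the CPU. */
	vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;

	memcpy(buf, vaddr, PAGE_SIZE);
	iounmap((void __iomem *)vaddr);

	return PAGE_SIZE;
}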

For kdump with SME, there are two cases that are not supported:

 ----------------------------------------------
| first-kernel | second-kernel | kdump support |
|      (mem_encrypt=on|off)    |   (yes|no)    |
|--------------+---------------+---------------|
|     on       |     on        |     yes       |
|     off      |     off       |     yes       |
|     on       |     off       |     no        |
|     off      |     on        |     no        |
|______________|_______________|_______________|

1. SME is enabled in the first kernel, but disabled in the kdump kernel
In this case, the old memory is encrypted and cannot be decrypted. The root
cause is that the encryption key is not visible to any software running on
the CPU cores (AMD CPUs with SME) and is randomly generated on each system
reset. That is to say, the kdump kernel has no chance to get the encryption
key, so the encrypted memory cannot be decrypted unless SME is active.

2. SME is disabled in the first kernel, but enabled in the kdump kernel
It is unnecessary to support this case: the old memory is unencrypted and can
be dumped as usual, so there is no need to enable SME in the kdump kernel.
Moreover, supporting this scenario would increase the complexity of the code,
because the SME flag would have to be passed from the first kernel to the
kdump kernel so that the kdump kernel knows whether the old memory is
encrypted.

There are two possible methods to pass the SME flag to the kdump kernel. The
first is to modify the assembly code, which touches common code and makes the
path too long. The second is to use kexec-tools: the first kernel would export
the SME flag via "proc" or "sysfs", kexec-tools would read it when loading the
kdump image and save it in boot_params, and the kdump kernel could then remap
the old memory according to the saved flag. But that is too expensive to do.

These patches are only for SME kdump; they do not support SEV kdump.

Test tools:
makedumpfile[v1.6.3]: https://github.com/LianboJ/makedumpfile
commit <e1de103eca8f> "A draft for kdump vmcore about AMD SME"
Note: This patch can only dump the vmcore when SME is enabled.

crash-7.2.3: https://github.com/crash-utility/crash.git
commit <001f77a05585> "Fix for Linux 4.19-rc1 and later kernels that contain kernel commit <7290d5809571>"

kexec-tools-2.0.17: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
commit <b9de21ef51a7> "kexec: fix for "Unhandled rela relocation: R_X86_64_PLT32" error"

Note:
Before you load the kernel and initramfs for kdump, this patch
(http://lists.infradead.org/pipermail/kexec/2018-September/021460.html) must be
merged into kexec-tools; the kdump kernel will then work well. This is needed
because a patch was removed based on v6 (x86/ioremap: strengthen the logic in
early_memremap_pgprot_adjust() to adjust encryption mask).

Test environment:
HP ProLiant DL385Gen10 AMD EPYC 7251
8-Core Processor
32768 MB memory
600 GB disk space

Linux 4.19-rc5:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
commit <6bf4ca7fbc85> "Linux 4.19-rc5"

Reference:
AMD64 Architecture Programmer's Manual
https://support.amd.com/TechDocs/24593.pdf

Changes since v6:
1. Removed a patch that was present in v6
(x86/ioremap: strengthen the logic in early_memremap_pgprot_adjust() to adjust encryption mask).
Dave Young suggested that this patch can be dropped and kexec-tools fixed instead.
Reference: http://lists.infradead.org/pipermail/kexec/2018-September/021460.html
2. Update the patch log.

Changes since v7:
1. Improve the patch log of patch 1/4 (Suggested by Baoquan He)
2. Add Reviewed-by to all patches (Tom Lendacky <thomas.lendacky@amd.com>)
3. Add Acked-by to patch 3/4 (Joerg Roedel <jroedel@suse.de>)
4. Remove the header file (linux/crash_dump.h) from
arch/x86/mm/ioremap.c (Suggested by Borislav)
5. Modify a comment and the patch log of patch 2/4 (Suggested by Borislav)
6. Delete the file arch/x86/kernel/crash_dump_encrypt.c and rewrite some
functions (Suggested by Borislav)
7. Fix all code style issues (Suggested by Borislav)
8. Fix the compile error "fs/proc/vmcore.c:115: undefined reference
   to `copy_oldmem_page_encrypted'"

Some known issues:
1. About SME
The upstream kernel will hang on an HP machine (DL385 Gen10, AMD EPYC 7251)
when we execute the kexec command as follows:

# kexec -l /boot/vmlinuz-4.19.0-rc5+ --initrd=/boot/initramfs-4.19.0-rc5+.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro mem_encrypt=on rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug nokaslr"
# kexec -e (or reboot)

But this issue cannot be reproduced on a Speedway machine, and it is
unrelated to the posted patches.

The kernel log:
[ 1248.932239] kexec_core: Starting new kernel
early console in extract_kernel
input_data: 0x000000087e91c3b4
input_len: 0x000000000067fcbd
output: 0x000000087d400000
output_len: 0x0000000001b6fa90
kernel_total_size: 0x0000000001a9d000
trampoline_32bit: 0x0000000000099000

Decompressing Linux...
Parsing ELF...        [---Here the system will hang]

Lianbo Jiang (4):
  x86/ioremap: add a function ioremap_encrypted() to remap kdump old
    memory
  kexec: allocate decrypted control pages for kdump in case SME is
    enabled
  iommu/amd: Remap the device table of IOMMU with the memory encryption
    mask for kdump
  kdump/vmcore: support encrypted old memory with SME enabled

 arch/x86/include/asm/io.h       |  2 +
 arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
 arch/x86/mm/ioremap.c           | 24 ++++++++----
 drivers/iommu/amd_iommu_init.c  | 14 ++++++-
 fs/proc/vmcore.c                | 24 +++++++++---
 include/linux/crash_dump.h      | 13 +++++++
 kernel/kexec_core.c             | 14 +++++++
 7 files changed, 131 insertions(+), 25 deletions(-)

-- 
2.17.1



* [PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory
  2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
@ 2018-09-30  3:10 ` Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 30+ messages in thread
From: Lianbo Jiang @ 2018-09-30  3:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled on AMD machine, the memory is encrypted in the first
kernel. In this case, SME also needs to be enabled in kdump kernel, and
the old memory has to be remapped with the memory encryption mask.

Here we only talk about the case where SME is active in the first kernel and
also active in the kdump kernel. There are four cases that need to be
considered.

a. dump vmcore
   It is encrypted in the first kernel and needs to be read out in the
   kdump kernel.

b. crash notes
   When dumping the vmcore, people usually need to read useful information
   from the notes, and the notes are also encrypted.

c. iommu device table
   It is allocated by the kernel and its pointer is filled into the MMIO of
   the AMD IOMMU. It is encrypted in the first kernel, and the old content
   needs to be read out and analyzed to get useful information.

d. mmio of amd iommu
   Registers reported by AMD firmware; this is not RAM and is not encrypted
   in either the first kernel or the kdump kernel.

To achieve the goal, the solution is:
1. Add a new bool parameter "encrypted" to __ioremap_caller().
   It is a low-level function which checks the newly added parameter; if it
   is true and we are in a kdump kernel, the memory is remapped with the SME
   mask.

2. Add a new function ioremap_encrypted() which explicitly passes in a "true"
   value for "encrypted".
   For a, b and c above, the kdump kernel calls ioremap_encrypted().

3. Adjust all existing ioremap wrapper functions to pass in "false" for
   "encrypted", so that they behave as before.

   ioremap_encrypted()\
   ioremap_cache()     |
   ioremap_prot()      |
   ioremap_wt()        |->__ioremap_caller()
   ioremap_wc()        |
   ioremap_uc()        |
   ioremap_nocache()  /

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Remove a redundant header file "linux/crash_dump.h". (Suggested by
Borislav)
2. Fix code style issues. (Suggested by Borislav)
3. Improve the patch log. (Suggested by Baoquan)

 arch/x86/include/asm/io.h |  2 ++
 arch/x86/mm/ioremap.c     | 24 ++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 6de64840dd22..b7b0bf36c400 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -192,6 +192,8 @@ extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 #define ioremap_cache ioremap_cache
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
 #define ioremap_prot ioremap_prot
+extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
+#define ioremap_encrypted ioremap_encrypted
 
 /**
  * ioremap     -   map bus memory into CPU space
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c63a545ec199..24e0920a9b25 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -131,7 +131,8 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-		unsigned long size, enum page_cache_mode pcm, void *caller)
+		unsigned long size, enum page_cache_mode pcm,
+		void *caller, bool encrypted)
 {
 	unsigned long offset, vaddr;
 	resource_size_t last_addr;
@@ -199,7 +200,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	 * resulting mapping.
 	 */
 	prot = PAGE_KERNEL_IO;
-	if (sev_active() && mem_flags.desc_other)
+	if ((sev_active() && mem_flags.desc_other) || encrypted)
 		prot = pgprot_encrypted(prot);
 
 	switch (pcm) {
@@ -291,7 +292,7 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
 	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
 	return __ioremap_caller(phys_addr, size, pcm,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_nocache);
 
@@ -324,7 +325,7 @@ void __iomem *ioremap_uc(resource_size_t phys_addr, unsigned long size)
 	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC;
 
 	return __ioremap_caller(phys_addr, size, pcm,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL_GPL(ioremap_uc);
 
@@ -341,7 +342,7 @@ EXPORT_SYMBOL_GPL(ioremap_uc);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
-					__builtin_return_address(0));
+					__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wc);
 
@@ -358,14 +359,21 @@ EXPORT_SYMBOL(ioremap_wc);
 void __iomem *ioremap_wt(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WT,
-					__builtin_return_address(0));
+					__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wt);
 
+void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size)
+{
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
+				__builtin_return_address(0), true);
+}
+EXPORT_SYMBOL(ioremap_encrypted);
+
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_cache);
 
@@ -374,7 +382,7 @@ void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 {
 	return __ioremap_caller(phys_addr, size,
 				pgprot2cachemode(__pgprot(prot_val)),
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_prot);
 
-- 
2.17.1



* [PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled
  2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
@ 2018-09-30  3:10 ` Lianbo Jiang
  2018-10-06 11:46   ` [tip:x86/mm] kexec: Allocate decrypted control pages for kdump if " tip-bot for Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 30+ messages in thread
From: Lianbo Jiang @ 2018-09-30  3:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled in the first kernel, decrypted pages need to be allocated
for kdump, because these pages are not accessed encrypted at the initial stage
when booting into the kdump kernel; this lets the kdump kernel boot in the
same manner as it was originally booted.
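
For context, the x86 implementations of the arch hooks used below essentially
toggle the page encryption attribute; roughly (a simplified sketch based on
the x86 machine_kexec code, not part of this patch):

int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
{
	/*
	 * Mark the pages decrypted so the kdump kernel can access them
	 * without the encryption mask at the initial stage.
	 */
	return set_memory_decrypted((unsigned long)vaddr, pages);
}

void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
{
	/* Restore the encryption attribute before the pages are freed. */
	set_memory_encrypted((unsigned long)vaddr, pages);
}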

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Modify a comment in the code. (Suggested by Borislav)
2. Improve the patch log. (Suggested by Borislav)

 kernel/kexec_core.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 23a83a4da38a..6353daaee7f1 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -471,6 +471,18 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
 		}
 	}
 
+	if (pages) {
+		/*
+		 * For kdump, these pages need to be decrypted if SME
+		 * is enabled.
+		 * Note that it is unnecessary to call
+		 * arch_kexec_pre_free_pages() for them, because these
+		 * pages are reserved memory: once the crash kernel is
+		 * loaded, it remains in this memory until reboot or
+		 * until it is unloaded.
+		 */
+		arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+	}
 	return pages;
 }
 
@@ -867,6 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result  = -ENOMEM;
 			goto out;
 		}
+		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
 		ptr = kmap(page);
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
@@ -884,6 +897,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
 		kunmap(page);
+		arch_kexec_pre_free_pages(page_address(page), 1);
 		if (result) {
 			result = -EFAULT;
 			goto out;
-- 
2.17.1



* [PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump
  2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
@ 2018-09-30  3:10 ` Lianbo Jiang
  2018-10-06 11:47   ` [tip:x86/mm] iommu/amd: Remap the IOMMU device table " tip-bot for Lianbo Jiang
  2018-09-30  3:10 ` [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
  2018-10-02 11:40 ` [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Borislav Petkov
  4 siblings, 1 reply; 30+ messages in thread
From: Lianbo Jiang @ 2018-09-30  3:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

The kdump kernel copies the IOMMU device table from the old device table,
which is encrypted when SME is enabled in the first kernel. So the old device
table has to be remapped with the memory encryption mask.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
---
 drivers/iommu/amd_iommu_init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 84b3e4445d46..3931c7de7c69 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -902,12 +902,22 @@ static bool copy_device_table(void)
 		}
 	}
 
-	old_devtb_phys = entry & PAGE_MASK;
+	/*
+	 * When SME is enabled in the first kernel, the entry includes the
+	 * memory encryption mask (sme_me_mask), which must be cleared to
+	 * obtain the true physical address in the kdump kernel.
+	 */
+	old_devtb_phys = __sme_clr(entry) & PAGE_MASK;
+
 	if (old_devtb_phys >= 0x100000000ULL) {
 		pr_err("The address of old device table is above 4G, not trustworthy!\n");
 		return false;
 	}
-	old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+	old_devtb = (sme_active() && is_kdump_kernel())
+		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
+							dev_table_size)
+		    : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+
 	if (!old_devtb)
 		return false;
 
-- 
2.17.1



* [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
                   ` (2 preceding siblings ...)
  2018-09-30  3:10 ` [PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
@ 2018-09-30  3:10 ` Lianbo Jiang
  2018-09-30  4:22   ` kbuild test robot
  2018-09-30  8:37   ` [PATCH v9 " lijiang
  2018-10-02 11:40 ` [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Borislav Petkov
  4 siblings, 2 replies; 30+ messages in thread
From: Lianbo Jiang @ 2018-09-30  3:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

In kdump kernel, the old memory needs to be dumped into vmcore file.
If SME is enabled in the first kernel, the old memory has to be
remapped with the memory encryption mask, which will be automatically
decrypted when read from DRAM.

For SME kdump, there are two cases that are not supported:

 ----------------------------------------------
| first-kernel | second-kernel | kdump support |
|      (mem_encrypt=on|off)    |   (yes|no)    |
|--------------+---------------+---------------|
|     on       |     on        |     yes       |
|     off      |     off       |     yes       |
|     on       |     off       |     no        |
|     off      |     on        |     no        |
|______________|_______________|_______________|

1. SME is enabled in the first kernel, but disabled in the kdump kernel
In this case, the old memory is encrypted and cannot be decrypted. The root
cause is that the encryption key is not visible to any software running on
the CPU cores (AMD CPUs with SME) and is randomly generated on each system
reset. That is to say, the kdump kernel has no chance to get the encryption
key, so the encrypted memory cannot be decrypted unless SME is active.

2. SME is disabled in the first kernel, but enabled in the kdump kernel
On the one hand, the old memory is unencrypted and can be dumped as usual, so
SME does not need to be enabled in the kdump kernel; on the other hand,
supporting this would increase the complexity of the code, because the SME
flag would have to be passed from the first kernel to the kdump kernel, which
is really too expensive to do.

These patches are only for SME kdump; they do not support SEV kdump.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Delete the file arch/x86/kernel/crash_dump_encrypt.c, move
copy_oldmem_page_encrypted() to arch/x86/kernel/crash_dump_64.c and rewrite
some functions. (Suggested by Borislav)
2. Fix all code style issues. (Suggested by Borislav)
3. Remove a redundant header file. (Suggested by Borislav)
4. Improve the patch log. (Suggested by Borislav)
5. Fix the compile error "fs/proc/vmcore.c:115: undefined reference
   to `copy_oldmem_page_encrypted'"

 arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
 fs/proc/vmcore.c                | 24 +++++++++---
 include/linux/crash_dump.h      | 13 +++++++
 3 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 4f2e0778feac..6adbde592c44 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,7 +12,7 @@
 #include <linux/io.h>
 
 /**
- * copy_oldmem_page - copy one page from "oldmem"
+ * __copy_oldmem_page - copy one page from "old memory encrypted or decrypted"
  * @pfn: page frame number to be copied
  * @buf: target memory address for the copy; this can be in kernel address
  *	space or user address space (see @userbuf)
@@ -20,31 +20,78 @@
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
  * @userbuf: if set, @buf is in user address space, use copy_to_user(),
  *	otherwise @buf is in kernel address space, use memcpy().
+ * @encrypted: if true, the old memory is encrypted.
+ *             if false, the old memory is decrypted.
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "old memory encrypted or decrypted". For this page, there
+ * is no pte mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-		size_t csize, unsigned long offset, int userbuf)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+				  unsigned long offset, int userbuf,
+				  bool encrypted)
 {
 	void  *vaddr;
 
 	if (!csize)
 		return 0;
 
-	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+	if (encrypted)
+		vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
+	else
+		vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+
 	if (!vaddr)
 		return -ENOMEM;
 
 	if (userbuf) {
-		if (copy_to_user(buf, vaddr + offset, csize)) {
-			iounmap(vaddr);
+		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+			iounmap((void __iomem *)vaddr);
 			return -EFAULT;
 		}
 	} else
 		memcpy(buf, vaddr + offset, csize);
 
 	set_iounmap_nonlazy();
-	iounmap(vaddr);
+	iounmap((void __iomem *)vaddr);
 	return csize;
 }
+
+/**
+ * copy_oldmem_page - copy one page from "old memory decrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory decrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ */
+ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+			 unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+}
+
+/**
+ * copy_oldmem_page_encrypted - copy one page from "old memory encrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory encrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
+ */
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index cbde728f8ac6..42c32d06f7da 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -24,6 +24,8 @@
 #include <linux/vmalloc.h>
 #include <linux/pagemap.h>
 #include <linux/uaccess.h>
+#include <linux/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/io.h>
 #include "internal.h"
 
@@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
 
 /* Reads a page from the oldmem device from given offset. */
 static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf)
+				u64 *ppos, int userbuf,
+				bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -120,8 +123,15 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
 		if (pfn_is_ram(pfn) == 0)
 			memset(buf, 0, nr_bytes);
 		else {
-			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
-						offset, userbuf);
+			if (encrypted)
+				tmp = copy_oldmem_page_encrypted(pfn, buf,
+								 nr_bytes,
+								 offset,
+								 userbuf);
+			else
+				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
+						       offset, userbuf);
+
 			if (tmp < 0)
 				return tmp;
 		}
@@ -155,7 +165,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
@@ -163,7 +173,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  */
 ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, sme_active());
 }
 
 /*
@@ -173,6 +183,7 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot)
 {
+	prot = pgprot_encrypted(prot);
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
@@ -351,7 +362,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 					    m->offset + m->size - *fpos,
 					    buflen);
 			start = m->paddr + *fpos - m->offset;
-			tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
+			tmp = read_from_oldmem(buffer, tsz, &start,
+					       userbuf, sme_active());
 			if (tmp < 0)
 				return tmp;
 			buflen -= tsz;
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..84d8ddcb818e 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,19 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+#else
+static inline
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return 0;
+}
+#endif
+
 void vmcore_cleanup(void);
 
 /* Architecture code defines this if there are other possible ELF
-- 
2.17.1



* Re: [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-30  3:10 ` [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
@ 2018-09-30  4:22   ` kbuild test robot
  2018-09-30  8:37   ` [PATCH v9 " lijiang
  1 sibling, 0 replies; 30+ messages in thread
From: kbuild test robot @ 2018-09-30  4:22 UTC (permalink / raw)
  To: Lianbo Jiang
  Cc: kbuild-all, linux-kernel, kexec, tglx, mingo, hpa, x86, akpm,
	dan.j.williams, thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp,
	brijesh.singh, dyoung, bhe, jroedel

[-- Attachment #1: Type: text/plain, Size: 2252 bytes --]

Hi Lianbo,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on sof-driver-fuweitax/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Support-kdump-for-AMD-secure-memory-encryption-SME/20180930-112044
base:   https://github.com/fuweitax/linux master
config: x86_64-randconfig-x005-201839 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

>> arch/x86//kernel/crash_dump_64.c:93:9: error: redefinition of 'copy_oldmem_page_encrypted'
    ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
            ^~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/x86//kernel/crash_dump_64.c:10:0:
   include/linux/crash_dump.h:34:9: note: previous definition of 'copy_oldmem_page_encrypted' was here
    ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
            ^~~~~~~~~~~~~~~~~~~~~~~~~~

vim +/copy_oldmem_page_encrypted +93 arch/x86//kernel/crash_dump_64.c

    78	
    79	/**
    80	 * copy_oldmem_page_encrypted - copy one page from "old memory encrypted"
    81	 * @pfn: page frame number to be copied
    82	 * @buf: target memory address for the copy; this can be in kernel address
    83	 *	space or user address space (see @userbuf)
    84	 * @csize: number of bytes to copy
    85	 * @offset: offset in bytes into the page (based on pfn) to begin the copy
    86	 * @userbuf: if set, @buf is in user address space, use copy_to_user(),
    87	 *	otherwise @buf is in kernel address space, use memcpy().
    88	 *
    89	 * Copy a page from "old memory encrypted". For this page, there is no pte
    90	 * mapped in the current kernel. We stitch up a pte, similar to
    91	 * kmap_atomic.
    92	 */
  > 93	ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 28437 bytes --]


* Re: [PATCH v9 4/4] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-30  3:10 ` [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
  2018-09-30  4:22   ` kbuild test robot
@ 2018-09-30  8:37   ` lijiang
  2018-10-01 20:22     ` Borislav Petkov
  2018-10-06 11:47     ` [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted " tip-bot for Lianbo Jiang
  1 sibling, 2 replies; 30+ messages in thread
From: lijiang @ 2018-09-30  8:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

In kdump kernel, the old memory needs to be dumped into vmcore file.
If SME is enabled in the first kernel, the old memory has to be
remapped with the memory encryption mask, which will be automatically
decrypted when read from DRAM.

For SME kdump, there are two cases that are not supported:

 ----------------------------------------------
| first-kernel | second-kernel | kdump support |
|      (mem_encrypt=on|off)    |   (yes|no)    |
|--------------+---------------+---------------|
|     on       |     on        |     yes       |
|     off      |     off       |     yes       |
|     on       |     off       |     no        |
|     off      |     on        |     no        |
|______________|_______________|_______________|

1. SME is enabled in the first kernel, but disabled in the kdump kernel
In this case, the old memory is encrypted and cannot be decrypted. The root
cause is that the encryption key is not visible to any software running on
the CPU cores (AMD CPUs with SME) and is randomly generated on each system
reset. That is to say, the kdump kernel has no chance to get the encryption
key, so the encrypted memory cannot be decrypted unless SME is active.

2. SME is disabled in the first kernel, but enabled in the kdump kernel
On the one hand, the old memory is unencrypted and can be dumped as usual, so
SME does not need to be enabled in the kdump kernel; on the other hand,
supporting this would increase the complexity of the code, because the SME
flag would have to be passed from the first kernel to the kdump kernel, which
is really too expensive to do.

These patches are only for SME kdump; they do not support SEV kdump.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Delete the file arch/x86/kernel/crash_dump_encrypt.c, move
copy_oldmem_page_encrypted() to arch/x86/kernel/crash_dump_64.c and rewrite
some functions. (Suggested by Borislav)
2. Fix all code style issues. (Suggested by Borislav)
3. Remove a redundant header file. (Suggested by Borislav)
4. Improve the patch log. (Suggested by Borislav)
5. Fix the compile error "fs/proc/vmcore.c:115: undefined reference
   to `copy_oldmem_page_encrypted'"
6. Fix the compile error "arch/x86//kernel/crash_dump_64.c:93:9:
   error: redefinition of 'copy_oldmem_page_encrypted'"

 arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
 fs/proc/vmcore.c                | 24 +++++++++---
 include/linux/crash_dump.h      | 13 +++++++
 3 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 4f2e0778feac..6adbde592c44 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,7 +12,7 @@
 #include <linux/io.h>
 
 /**
- * copy_oldmem_page - copy one page from "oldmem"
+ * __copy_oldmem_page - copy one page from "old memory encrypted or decrypted"
  * @pfn: page frame number to be copied
  * @buf: target memory address for the copy; this can be in kernel address
  *	space or user address space (see @userbuf)
@@ -20,31 +20,78 @@
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
  * @userbuf: if set, @buf is in user address space, use copy_to_user(),
  *	otherwise @buf is in kernel address space, use memcpy().
+ * @encrypted: if true, the old memory is encrypted.
+ *             if false, the old memory is decrypted.
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "old memory encrypted or decrypted". For this page, there
+ * is no pte mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-		size_t csize, unsigned long offset, int userbuf)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+				  unsigned long offset, int userbuf,
+				  bool encrypted)
 {
 	void  *vaddr;
 
 	if (!csize)
 		return 0;
 
-	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+	if (encrypted)
+		vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
+	else
+		vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+
 	if (!vaddr)
 		return -ENOMEM;
 
 	if (userbuf) {
-		if (copy_to_user(buf, vaddr + offset, csize)) {
-			iounmap(vaddr);
+		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+			iounmap((void __iomem *)vaddr);
 			return -EFAULT;
 		}
 	} else
 		memcpy(buf, vaddr + offset, csize);
 
 	set_iounmap_nonlazy();
-	iounmap(vaddr);
+	iounmap((void __iomem *)vaddr);
 	return csize;
 }
+
+/**
+ * copy_oldmem_page - copy one page from "old memory decrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory decrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ */
+ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+			 unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+}
+
+/**
+ * copy_oldmem_page_encrypted - copy one page from "old memory encrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory encrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
+ */
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index cbde728f8ac6..42c32d06f7da 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -24,6 +24,8 @@
 #include <linux/vmalloc.h>
 #include <linux/pagemap.h>
 #include <linux/uaccess.h>
+#include <linux/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/io.h>
 #include "internal.h"
 
@@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
 
 /* Reads a page from the oldmem device from given offset. */
 static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf)
+				u64 *ppos, int userbuf,
+				bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -120,8 +123,15 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
 		if (pfn_is_ram(pfn) == 0)
 			memset(buf, 0, nr_bytes);
 		else {
-			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
-						offset, userbuf);
+			if (encrypted)
+				tmp = copy_oldmem_page_encrypted(pfn, buf,
+								 nr_bytes,
+								 offset,
+								 userbuf);
+			else
+				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
+						       offset, userbuf);
+
 			if (tmp < 0)
 				return tmp;
 		}
@@ -155,7 +165,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
@@ -163,7 +173,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  */
 ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, sme_active());
 }
 
 /*
@@ -173,6 +183,7 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot)
 {
+	prot = pgprot_encrypted(prot);
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
@@ -351,7 +362,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 					    m->offset + m->size - *fpos,
 					    buflen);
 			start = m->paddr + *fpos - m->offset;
-			tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
+			tmp = read_from_oldmem(buffer, tsz, &start,
+					       userbuf, sme_active());
 			if (tmp < 0)
 				return tmp;
 			buflen -= tsz;
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..84d8ddcb818e 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,19 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+#if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_X86_64)
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+#else
+static inline
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return 0;
+}
+#endif
+
 void vmcore_cleanup(void);
 
 /* Architecture code defines this if there are other possible ELF
-- 
2.17.1




* Re: [PATCH v9 4/4] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-30  8:37   ` [PATCH v9 " lijiang
@ 2018-10-01 20:22     ` Borislav Petkov
  2018-10-06 11:47     ` [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted " tip-bot for Lianbo Jiang
  1 sibling, 0 replies; 30+ messages in thread
From: Borislav Petkov @ 2018-10-01 20:22 UTC (permalink / raw)
  To: lijiang
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On Sun, Sep 30, 2018 at 04:37:41PM +0800, lijiang wrote:
> In kdump kernel, the old memory needs to be dumped into vmcore file.
> If SME is enabled in the first kernel, the old memory has to be
> remapped with the memory encryption mask, which will be automatically
> decrypted when read from DRAM.
> 
> For SME kdump, there are two cases that are not supported:

Get rid of those two cases in the commit message.

> 
>  ----------------------------------------------
> | first-kernel | second-kernel | kdump support |
> |      (mem_encrypt=on|off)    |   (yes|no)    |
> |--------------+---------------+---------------|
> |     on       |     on        |     yes       |
> |     off      |     off       |     yes       |
> |     on       |     off       |     no        |
> |     off      |     on        |     no        |
> |______________|_______________|_______________|
> 
> 1. SME is enabled in the first kernel, but disabled in the kdump kernel
> In this case, the old memory is encrypted and cannot be decrypted. The root
> cause is that the encryption key is not visible to any software running on
> the CPU cores (AMD CPUs with SME) and is randomly generated on each system
> reset. That is to say, the kdump kernel has no chance to get the encryption
> key, so the encrypted memory cannot be decrypted unless SME is active.
> 
> 2. SME is disabled in the first kernel, but enabled in the kdump kernel
> On the one hand, the old memory is unencrypted and can be dumped as usual, so
> SME does not need to be enabled in the kdump kernel; on the other hand,
> supporting this would increase the complexity of the code, because the SME
> flag would have to be passed from the first kernel to the kdump kernel, which
> is really too expensive to do.
> 
> These patches are only for SME kdump; they do not support SEV kdump.
> 
> Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

You cannot keep Reviewed-by: tags on patches which you change in a
non-trivial manner.

> ---
> Changes since v7:
> 1. Delete a file arch/x86/kernel/crash_dump_encrypt.c, and move the
> copy_oldmem_page_encrypted() to arch/x86/kernel/crash_dump_64.c, also
> rewrite some functions.(Suggested by Borislav)
> 2. Modify all code style issue.(Suggested by Borislav)
> 3. Remove a reduntant header file.(Suggested by Borislav)
> 4. Improve patch log.(Suggested by Borislav)
> 5. Modify compile error "fs/proc/vmcore.c:115: undefined reference
>    to `copy_oldmem_page_encrypted'"
> 6. Modify compile error "arch/x86//kernel/crash_dump_64.c:93:9:
>    error: redefinition of 'copy_oldmem_page_encrypted'"
> 
>  arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
>  fs/proc/vmcore.c                | 24 +++++++++---
>  include/linux/crash_dump.h      | 13 +++++++
>  3 files changed, 87 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
> index 4f2e0778feac..6adbde592c44 100644
> --- a/arch/x86/kernel/crash_dump_64.c
> +++ b/arch/x86/kernel/crash_dump_64.c
> @@ -12,7 +12,7 @@
>  #include <linux/io.h>
>  
>  /**
> - * copy_oldmem_page - copy one page from "oldmem"
> + * __copy_oldmem_page - copy one page from "old memory encrypted or decrypted"

Dammit, what's it with those "old memory encrypted or decrypted" in quotation
marks?! What is wrong with simply saying:

Copy one page of the old kernel's memory. If @encrypted is set, the old
memory will be remapped with the encryption mask.

How hard is that?!

>   * @pfn: page frame number to be copied
>   * @buf: target memory address for the copy; this can be in kernel address
>   *	space or user address space (see @userbuf)
> @@ -20,31 +20,78 @@
>   * @offset: offset in bytes into the page (based on pfn) to begin the copy
>   * @userbuf: if set, @buf is in user address space, use copy_to_user(),
>   *	otherwise @buf is in kernel address space, use memcpy().
> + * @encrypted: if true, the old memory is encrypted.
> + *             if false, the old memory is decrypted.
>   *
> - * Copy a page from "oldmem". For this page, there is no pte mapped
> - * in the current kernel. We stitch up a pte, similar to kmap_atomic.
> + * Copy a page from "old memory encrypted or decrypted". For this page, there
> + * is no pte mapped in the current kernel. We stitch up a pte, similar to
> + * kmap_atomic.
>   */

This function is static now - why does it need to keep the comments
above it? And you've duplicated almost the same comment *three* times
now. Why?

Have the whole comment *once* and only one line sentences over the other
functions explaining the difference only.
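
For illustration only, a minimal sketch of that layout, reusing the names from
this patch (the helper body is elided since it is unchanged):

/**
 * __copy_oldmem_page - copy one page of the old kernel's memory
 * @pfn: page frame number to be copied
 * @buf: target memory address for the copy; this can be in kernel address
 *	space or user address space (see @userbuf)
 * @csize: number of bytes to copy
 * @offset: offset in bytes into the page (based on pfn) to begin the copy
 * @userbuf: if set, @buf is in user address space, use copy_to_user(),
 *	otherwise @buf is in kernel address space, use memcpy().
 * @encrypted: if true, the old memory is remapped with the encryption mask.
 *
 * Copy a page of the old kernel's memory. For this page, there is no pte
 * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
 */
static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
				  unsigned long offset, int userbuf,
				  bool encrypted)
{
	/* ... body as in the patch ... */
}

/* Copy a page of the old kernel's unencrypted memory. */
ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
			 unsigned long offset, int userbuf)
{
	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
}

/* Copy a page of the old kernel's encrypted memory. */
ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
				   unsigned long offset, int userbuf)
{
	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
}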

> -ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
> -		size_t csize, unsigned long offset, int userbuf)
> +static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
> +				  unsigned long offset, int userbuf,
> +				  bool encrypted)
>  {
>  	void  *vaddr;

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)


* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
                   ` (3 preceding siblings ...)
  2018-09-30  3:10 ` [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
@ 2018-10-02 11:40 ` Borislav Petkov
  2018-10-03  3:57   ` lijiang
  4 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-02 11:40 UTC (permalink / raw)
  To: Lianbo Jiang
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On Sun, Sep 30, 2018 at 11:10:29AM +0800, Lianbo Jiang wrote:
> When SME is enabled on AMD machine, it also needs to support kdump. Because

Ok, I've cleaned them up heavily and pushed them here:

https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git, branch rc6+0-sme-kdump

However, testing on my zen box doesn't go as planned. And this is even before
your patches.

I load the kdump kernel with kexec-tools from the git-repo + the patch you
mention:

# ~/bpetkov/bin/sbin/kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd /boot/initrd-4.19.0-rc6+ --command-line="root=/dev/mapper/ubuntu--010236012132--vg-leap15 splash=silent showopts console=ttyS5,115200 console=tty0 debug ignore_loglevel log_buf_len=16M 1 irqpoll maxcpus=1 reset_devices vga=normal"

verify that it has been loaded:

# grep . /sys/kernel/kexec_*
/sys/kernel/kexec_crash_loaded:1
/sys/kernel/kexec_crash_size:268435456
/sys/kernel/kexec_loaded:0

and then trigger the panic:

# echo c > /proc/sysrq-trigger

and I see the panic happening in the serial console but then nothing.
The box resets instead.

So something's still broken.

Trying the kexec -l/kexec -e game works - the second kernel gets kexeced
properly.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)


* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-02 11:40 ` [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Borislav Petkov
@ 2018-10-03  3:57   ` lijiang
  2018-10-03 11:34     ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-03  3:57 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On 2018-10-02 19:40, Borislav Petkov wrote:
> On Sun, Sep 30, 2018 at 11:10:29AM +0800, Lianbo Jiang wrote:
>> When SME is enabled on AMD machine, it also needs to support kdump. Because
> 
> Ok, I've cleaned them up heavily and pushed them here:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git, branch rc6+0-sme-kdump
> 
> However, testing on my zen box doesn't go as planned. And this is even before
> your patches.
> 
> I load the kdump kernel with kexec-tools from the git-repo + the patch you
> mention:
> 
> # ~/bpetkov/bin/sbin/kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd /boot/initrd-4.19.0-rc6+ --command-line="root=/dev/mapper/ubuntu--010236012132--vg-leap15 splash=silent showopts console=ttyS5,115200 console=tty0 debug ignore_loglevel log_buf_len=16M 1 irqpoll maxcpus=1 reset_devices vga=normal"
> 
> verify that it has been loaded:
> 
> # grep . /sys/kernel/kexec_*
> /sys/kernel/kexec_crash_loaded:1
> /sys/kernel/kexec_crash_size:268435456
> /sys/kernel/kexec_loaded:0
> 
> and then trigger the panic:
> 
> # echo c > /proc/sysrq-trigger
> 
> and I see the panic happening in the serial console but then nothing.
> The box resets instead.
> 
> So something's still broken.
> 

Sorry for my late reply; I was on holiday.

I noticed that your test was based on [PATCH v8 RESEND 4/4]. Could you please
test it based on [PATCH v9 4/4]? [PATCH v8 RESEND 4/4] had a compile error,
which has been fixed in [PATCH v9 4/4].

Alternatively, I can improve the patch log and the code comments of
[PATCH v9 4/4] based on your comments, post the series again, and also provide
my test results for the series. What do you think?

Thanks.
Lianbo

> Trying the kexec -l/kexec -e game works - the second kernel gets kexeced
> properly.
> 


* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-03  3:57   ` lijiang
@ 2018-10-03 11:34     ` Borislav Petkov
  2018-10-04  9:33       ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-03 11:34 UTC (permalink / raw)
  To: lijiang
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On Wed, Oct 03, 2018 at 11:57:59AM +0800, lijiang wrote:
> I noticed that your test was based on [patch v8 RESEND 4/4],

How did you notice that?

Let's see, the patch in question is this one:

https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git/commit/?h=rc6%2b0-sme-kdump&id=4a0f2adf6cf374ed3e742134e40591ea33d55b05

and it has a Link tag:

Link: https://lkml.kernel.org/r/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com

Now let's open that link tag. I don't know about you but my browser says:

https://lore.kernel.org/lkml/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com/T/#u

which points to

Subject: Re: [PATCH v9 4/4] kdump/vmcore: support encrypted old memory with SME enabled

Looking at that mail, its message id is:

Message-ID: <be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com>

It looks to me it is already v9, no?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)


* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-03 11:34     ` Borislav Petkov
@ 2018-10-04  9:33       ` lijiang
  2018-10-04 19:02         ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-04  9:33 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On 2018-10-03 19:34, Borislav Petkov wrote:
> On Wed, Oct 03, 2018 at 11:57:59AM +0800, lijiang wrote:
>> I noticed that your test was based on [patch v8 RESEND 4/4],
> 
> How did you notice that?
> 
> Let's see, the patch in question is this one:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git/commit/?h=rc6%2b0-sme-kdump&id=4a0f2adf6cf374ed3e742134e40591ea33d55b05
> 
> and it has a Link tag:
> 
> Link: https://lkml.kernel.org/r/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com
> 
> Now let's open that link tag. I don't know about you but my browser says:
> 
> https://lore.kernel.org/lkml/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com/T/#u
> 
> which points to
> 
> Subject: Re: [PATCH v9 4/4] kdump/vmcore: support encrypted old memory with SME enabled
> 
> Looking at that mail, its message id is:
> 
> Message-ID: <be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com>
> 
> It looks to me it is already v9, no?
> 

According to your description, it seems that the patch is indeed v9. In fact,
the only difference between [PATCH v8 RESEND 4/4] and [PATCH v9 4/4] is the
content of the header file:

diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..84d8ddcb818e 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,19 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+#if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_X86_64)
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+#else
+static inline
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return 0;
+}
+#endif
+


I have tested the patch again based on upstream 4.19.0-rc6, and it works very well.

I'm not sure whether your machine also has the SME feature. If it does not have SME
and kdump previously worked well, then kdump should still work properly after these
patches are applied, because that is similar to the case in which SME is disabled.
I suggest a comparison test of both configurations.

If your machine has the SME feature and SME is also enabled, these patches must be
applied before testing kdump; otherwise kdump won't work.
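
In case it helps with the comparison test: the first kernel prints
"AMD Secure Memory Encryption (SME) active" at boot when SME is really enabled
(that line shows up in the crash log further down), and from userspace a quick
CPUID query tells you whether the CPU advertises SME at all. A small stand-alone
sketch for checking the hardware (my own quick check, not part of these patches):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/*
	 * CPUID Fn8000_001F reports AMD memory encryption capabilities:
	 * EAX bit 0 = SME supported, EAX bit 1 = SEV supported,
	 * EBX[5:0]  = position of the encryption bit (C-bit) in a PTE.
	 */
	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x8000001F not available");
		return 1;
	}

	printf("SME supported: %s\n", (eax & 0x1) ? "yes" : "no");
	printf("SEV supported: %s\n", (eax & 0x2) ? "yes" : "no");
	printf("C-bit position: %u\n", ebx & 0x3f);
	return 0;
}

Note that "supported" only means the CPU has the feature; whether SME is actually
active still depends on the firmware enabling it and on mem_encrypt=on (or
CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT), which is why the dmesg line above is
the more direct check.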

This was your command that loaded the crash kernel and initrd:
# ~/bpetkov/bin/sbin/kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd /boot/initrd-4.19.0-rc6+ --command-line="root=/dev/mapper/ubuntu--010236012132--vg-leap15 splash=silent showopts console=ttyS5,115200 console=tty0 debug ignore_loglevel log_buf_len=16M 1 irqpoll maxcpus=1 reset_devices vga=normal"

If the option CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT was not enabled, that command
only tested the case in which SME was disabled (because there was no "mem_encrypt=on"
on the kernel command line). As previously mentioned, it may be necessary to do a
comparison test for both cases.

I have no test environment for Ubuntu. Would you like to share the panic log? 

Thanks.
Lianbo

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-04  9:33       ` lijiang
@ 2018-10-04 19:02         ` Borislav Petkov
  2018-10-05  5:52           ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-04 19:02 UTC (permalink / raw)
  To: lijiang
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On Thu, Oct 04, 2018 at 05:33:14PM +0800, lijiang wrote:
> I have tested the patch again based on upstream 4.19.0-rc6, and it works very well.

How have you tested this?

Please describe the steps in detail.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-04 19:02         ` Borislav Petkov
@ 2018-10-05  5:52           ` lijiang
  2018-10-06  9:56             ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-05  5:52 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On 2018-10-05 03:02, Borislav Petkov wrote:
> On Thu, Oct 04, 2018 at 05:33:14PM +0800, lijiang wrote:
>> I have tested the patch again based on upstream 4.19.0-rc6, and it works very well.
> 
> How have you tested this?
> 
> Please describe the steps in detail.
> 

There are eight steps:

Step 1: prepare the test tools; you can refer to the cover letter.
     a. makedumpfile
     b. crash-7.2.3
     c. kexec-tools-2.0.17

     Compile and install these test tools.

Step 2: make sure that the following kernel option is enabled if this machine has the SME feature.
        CONFIG_AMD_MEM_ENCRYPT=y 

Step 3: apply these patches on top of upstream v4.19-rc6, then compile and install the kernel
        #git am xxxx.patch
        #make ARCH=x86_64 -j32
        #make ARCH=x86_64 modules_install -j32
        #make ARCH=x86_64 install

Step 4: configure kdump and modify some parameters for SME
     a. configure kdump.conf
        #cat /etc/kdump.conf
        path /var/crash
        core_collector makedumpfile -l --message-level 1 -d 31

     b. add the parameter "mem_encrypt=on" to the kernel command line in grub.cfg if
        this machine has the SME feature, and also add crashkernel=xx, which reserves
        memory for kdump.

Step 5: reboot, and then load the crash kernel image and kdump initramfs.

     a: When SME is enabled, I use this command to load them:

        #kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initramfs-4.19.0-rc6+kdump.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap mem_encrypt=on console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"

     b: When SME is disabled, I use this command to load them:

        #kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initramfs-4.19.0-rc6+kdump.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap mem_encrypt=off console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"

Step 6: trigger panic by sysrq
        #echo c > /proc/sysrq-trigger

Step 7: check whether the vmcore has been created.

[root@hp-dl385g10-03 linux]# ls -al /var/crash/*
/var/crash/127.0.0.1-2018-10-05-01:20:20:
drwxr-xr-x. 2 root root         44 10月  5 01:20 .
drwxr-xr-x. 3 root root        107 10月  5 01:20 ..
-rw-------. 1 root root 1179265928 10月  5 01:20 vmcore
-rw-r--r--. 1 root root     126571 10月  5 01:20 vmcore-dmesg.txt

/var/crash/127.0.0.1-2018-10-05-01:35:21:
drwxr-xr-x. 2 root root         44 10月  5 01:35 .
drwxr-xr-x. 4 root root        144 10月  5 01:35 ..
-rw-------. 1 root root 1084270120 10月  5 01:35 vmcore
-rw-r--r--. 1 root root     125578 10月  5 01:35 vmcore-dmesg.txt

Step 8: check whether the crash tool can parse the vmcore
     a. When SME is enabled.
        #crash vmlinux /var/crash/127.0.0.1-2018-10-05-01\:20\:20/vmcore

crash 7.2.3++
Copyright (C) 2002-2017  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.
 
GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

WARNING: kernel relocated [308MB]: patching 85986 gdb minimal_symbol values

      KERNEL: vmlinux                                                  
    DUMPFILE: /var/crash/127.0.0.1-2018-10-05-01:20:20/vmcore  [PARTIAL DUMP]
        CPUS: 32
        DATE: Fri Oct  5 01:19:40 2018
      UPTIME: 00:04:04
LOAD AVERAGE: 0.18, 0.33, 0.16
       TASKS: 462
    NODENAME: hp-dl385g10-03.lab.eng.pek2.redhat.com
     RELEASE: 4.19.0-rc6+
     VERSION: #223 SMP Fri Oct 5 01:05:56 EDT 2018
     MACHINE: x86_64  (2095 Mhz)
      MEMORY: 31.8 GB
       PANIC: "sysrq: SysRq : Trigger a crash"
         PID: 9451
     COMMAND: "bash"
        TASK: ffff9d53c5f8c500  [THREAD_INFO: ffff9d53c5f8c500]
         CPU: 26
       STATE: TASK_RUNNING (SYSRQ)

crash> log
[    0.000000] Linux version 4.19.0-rc6+ (root@hp-dl385g10-03.lab.eng.pek2.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-34) (GCC)) #223 SMP Fri Oct 5 01:05:56 EDT 2018
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-4.19.0-rc6+ root=/dev/mapper/rhel_hp--dl385g10--03-root ro mem_encrypt=on crashkernel=2G,high rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap console=ttyS0,115200n81 LANG=en_US.UTF-8
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000008bfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000008c000-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000029920fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000029921000-0x0000000029921fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000029922000-0x0000000062242fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000062243000-0x0000000062342fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000062343000-0x0000000062355fff] ACPI data
[    0.000000] BIOS-e820: [mem 0x0000000062356000-0x0000000062356fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000062357000-0x00000000623d5fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000623d6000-0x0000000062615fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000062616000-0x0000000062637fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000062638000-0x0000000062697fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000062698000-0x0000000062757fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000062758000-0x0000000062758fff] ACPI data
[    0.000000] BIOS-e820: [mem 0x0000000062759000-0x0000000062789fff] usable
[    0.000000] BIOS-e820: [mem 0x000000006278a000-0x000000006278cfff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000006278d000-0x00000000627d6fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000627d7000-0x00000000627d7fff] ACPI data
[    0.000000] BIOS-e820: [mem 0x00000000627d8000-0x000000006286afff] usable
[    0.000000] BIOS-e820: [mem 0x000000006286b000-0x000000006286efff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006286f000-0x00000000682f8fff] usable
...
...
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [ttyS0] enabled
[    0.000000] AMD Secure Memory Encryption (SME) active
[    0.000000] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
[    0.000000] ACPI: Core revision 20180810
...
...

     b. When SME is disabled.
        #crash vmlinux /var/crash/127.0.0.1-2018-10-05-01\:35\:21/vmcore

crash 7.2.3++
Copyright (C) 2002-2017  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.
 
GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

WARNING: kernel relocated [576MB]: patching 85986 gdb minimal_symbol values

      KERNEL: vmlinux                                                  
    DUMPFILE: /var/crash/127.0.0.1-2018-10-05-01:35:21/vmcore  [PARTIAL DUMP]
        CPUS: 32
        DATE: Fri Oct  5 01:34:44 2018
      UPTIME: 00:01:43
LOAD AVERAGE: 0.31, 0.20, 0.08
       TASKS: 456
    NODENAME: hp-dl385g10-03.lab.eng.pek2.redhat.com
     RELEASE: 4.19.0-rc6+
     VERSION: #223 SMP Fri Oct 5 01:05:56 EDT 2018
     MACHINE: x86_64  (2095 Mhz)
      MEMORY: 31.8 GB
       PANIC: "sysrq: SysRq : Trigger a crash"
         PID: 2093
     COMMAND: "bash"
        TASK: ffff9be9aa062e00  [THREAD_INFO: ffff9be9aa062e00]
         CPU: 12
       STATE: TASK_RUNNING (SYSRQ)

crash> log
[    0.000000] Linux version 4.19.0-rc6+ (root@hp-dl385g10-03.lab.eng.pek2.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-34) (GCC)) #223 SMP Fri Oct 5 01:05:56 EDT 2018
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-4.19.0-rc6+ root=/dev/mapper/rhel_hp--dl385g10--03-root ro mem_encrypt=off crashkernel=2G,high rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap console=ttyS0,115200n81 LANG=en_US.UTF-8
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000008bfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000008c000-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000029920fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000029921000-0x0000000029921fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000029922000-0x00000000622dbfff] usable
[    0.000000] BIOS-e820: [mem 0x00000000622dc000-0x000000006261bfff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006261c000-0x000000006263dfff] usable
[    0.000000] BIOS-e820: [mem 0x000000006263e000-0x000000006269dfff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006269e000-0x00000000627d9fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000627da000-0x00000000627ecfff] ACPI data
[    0.000000] BIOS-e820: [mem 0x00000000627ed000-0x00000000627edfff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x00000000627ee000-0x00000000627f1fff] ACPI data
[    0.000000] BIOS-e820: [mem 0x00000000627f2000-0x00000000627f3fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000627f4000-0x00000000627f4fff] ACPI data
[    0.000000] BIOS-e820: [mem 0x00000000627f5000-0x000000006286afff] usable
[    0.000000] BIOS-e820: [mem 0x000000006286b000-0x000000006286efff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006286f000-0x00000000682f8fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000682f9000-0x0000000068b05fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000068b06000-0x0000000068b09fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000068b0a000-0x0000000068b1afff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000068b1b000-0x0000000068b1dfff] ACPI NVS
...
...
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [ttyS0] enabled
[    0.000000] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
[    0.000000] ACPI: Core revision 20180810
...
...

Regards,
Lianbo

> Thx.
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-05  5:52           ` lijiang
@ 2018-10-06  9:56             ` Borislav Petkov
  2018-10-07  6:09               ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-06  9:56 UTC (permalink / raw)
  To: lijiang
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On Fri, Oct 05, 2018 at 01:52:26PM +0800, lijiang wrote:
>      b. add the parameter "mem_encrypt=on" to the kernel command line in grub.cfg if
>         this machine has the SME feature, and also add crashkernel=xx, which reserves
>         memory for kdump.

Ok, I'm doing the simpler crashkernel= cmdline:

crashkernel=256M

That says:

[    0.011918] Reserving 256MB of memory at 640MB for crashkernel (System RAM: 262030MB)

> Step 5: reboot, and then load the crash kernel image and kdump initramfs.
> 
>      a: When SME is enabled, I use this command to load them:
> 
>         #kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initramfs-4.19.0-rc6+kdump.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap mem_encrypt=on console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"

Ok, did that, my cmdline is:

~/bpetkov/src/kexec-tools/build/sbin/kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initrd-4.19.0-rc6+ --command-line="root=/dev/mapper/ubuntu--010236012132--vg-leap15 rd.lvm.lv=ubuntu--010236012132--vg-leap15/root rd.lvm.lv=ubuntu--010236012132--vg-leap15/swap splash=silent showopts console=ttyS5,115200 console=tty0 debug ignore_loglevel log_buf_len=16M nr_cpus=1 irqpoll maxcpus=1 reset_devices vga=normal mem_encrypt=on LANG=en_US.UTF-8 earlyprintk=serial cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"

Verified it loaded ok:

$ grep . /sys/kernel/kexec_*
/sys/kernel/kexec_crash_loaded:1
/sys/kernel/kexec_crash_size:268435456
/sys/kernel/kexec_loaded:0

> Step 6: trigger panic by sysrq
>         #echo c > /proc/sysrq-trigger

Did that and I got into the kdump kernel with SME. So I'd guess your kdump
kernel command line was needed - I was missing a bunch of switches and
remote-debugging a box kexecing is not fun.

So thanks a lot for the detailed steps, I'm putting them to my notes.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [tip:x86/mm] kexec: Allocate decrypted control pages for kdump if SME is enabled
  2018-09-30  3:10 ` [PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
@ 2018-10-06 11:46   ` tip-bot for Lianbo Jiang
  0 siblings, 0 replies; 30+ messages in thread
From: tip-bot for Lianbo Jiang @ 2018-10-06 11:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: thomas.lendacky, hpa, bp, mingo, lijiang, tglx, linux-kernel

Commit-ID:  9cf38d5559e813cccdba8b44c82cc46ba48d0896
Gitweb:     https://git.kernel.org/tip/9cf38d5559e813cccdba8b44c82cc46ba48d0896
Author:     Lianbo Jiang <lijiang@redhat.com>
AuthorDate: Sun, 30 Sep 2018 11:10:31 +0800
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Sat, 6 Oct 2018 12:01:51 +0200

kexec: Allocate decrypted control pages for kdump if SME is enabled

When SME is enabled in the first kernel, it needs to allocate decrypted
pages for kdump because when the kdump kernel boots, these pages need to
be accessed decrypted in the initial boot stage, before SME is enabled.

 [ bp: clean up text. ]

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kexec@lists.infradead.org
Cc: tglx@linutronix.de
Cc: mingo@redhat.com
Cc: hpa@zytor.com
Cc: akpm@linux-foundation.org
Cc: dan.j.williams@intel.com
Cc: bhelgaas@google.com
Cc: baiyaowei@cmss.chinamobile.com
Cc: tiwai@suse.de
Cc: brijesh.singh@amd.com
Cc: dyoung@redhat.com
Cc: bhe@redhat.com
Cc: jroedel@suse.de
Link: https://lkml.kernel.org/r/20180930031033.22110-3-lijiang@redhat.com
---
 kernel/kexec_core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 23a83a4da38a..86ef06d3dbe3 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -471,6 +471,10 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
 		}
 	}
 
+	/* Ensure that these pages are decrypted if SME is enabled. */
+	if (pages)
+		arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+
 	return pages;
 }
 
@@ -867,6 +871,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result  = -ENOMEM;
 			goto out;
 		}
+		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
 		ptr = kmap(page);
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
@@ -884,6 +889,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
 		kunmap(page);
+		arch_kexec_pre_free_pages(page_address(page), 1);
 		if (result) {
 			result = -EFAULT;
 			goto out;
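
For context, arch_kexec_post_alloc_pages()/arch_kexec_pre_free_pages() are the arch
hooks that toggle the encryption attribute on the affected pages; the x86 side is
roughly the following (a sketch, not quoted from the tree — set_memory_decrypted()
and set_memory_encrypted() do the actual page-table and cache maintenance):

int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
{
	/*
	 * Clear the C-bit: the kdump kernel touches these pages very early,
	 * before it has set up SME for itself.
	 */
	return set_memory_decrypted((unsigned long)vaddr, pages);
}

void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
{
	/* Restore the encryption attribute before the pages are freed. */
	set_memory_encrypted((unsigned long)vaddr, pages);
}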

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [tip:x86/mm] iommu/amd: Remap the IOMMU device table with the memory encryption mask for kdump
  2018-09-30  3:10 ` [PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
@ 2018-10-06 11:47   ` tip-bot for Lianbo Jiang
  0 siblings, 0 replies; 30+ messages in thread
From: tip-bot for Lianbo Jiang @ 2018-10-06 11:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, tglx, jroedel, thomas.lendacky, bp, mingo, hpa, lijiang

Commit-ID:  8780158cf977ea5f9912931a30b3d575b36dba22
Gitweb:     https://git.kernel.org/tip/8780158cf977ea5f9912931a30b3d575b36dba22
Author:     Lianbo Jiang <lijiang@redhat.com>
AuthorDate: Sun, 30 Sep 2018 11:10:32 +0800
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Sat, 6 Oct 2018 12:08:24 +0200

iommu/amd: Remap the IOMMU device table with the memory encryption mask for kdump

The kdump kernel copies the IOMMU device table from the old device table
which is encrypted when SME is enabled in the first kernel. So remap the
old device table with the memory encryption mask in the kdump kernel.

 [ bp: Massage commit message. ]

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Cc: kexec@lists.infradead.org
Cc: tglx@linutronix.de
Cc: mingo@redhat.com
Cc: hpa@zytor.com
Cc: akpm@linux-foundation.org
Cc: dan.j.williams@intel.com
Cc: bhelgaas@google.com
Cc: baiyaowei@cmss.chinamobile.com
Cc: tiwai@suse.de
Cc: brijesh.singh@amd.com
Cc: dyoung@redhat.com
Cc: bhe@redhat.com
Link: https://lkml.kernel.org/r/20180930031033.22110-4-lijiang@redhat.com
---
 drivers/iommu/amd_iommu_init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 84b3e4445d46..3931c7de7c69 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -902,12 +902,22 @@ static bool copy_device_table(void)
 		}
 	}
 
-	old_devtb_phys = entry & PAGE_MASK;
+	/*
+	 * When SME is enabled in the first kernel, the entry includes the
+	 * memory encryption mask(sme_me_mask), we must remove the memory
+	 * encryption mask to obtain the true physical address in kdump kernel.
+	 */
+	old_devtb_phys = __sme_clr(entry) & PAGE_MASK;
+
 	if (old_devtb_phys >= 0x100000000ULL) {
 		pr_err("The address of old device table is above 4G, not trustworthy!\n");
 		return false;
 	}
-	old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+	old_devtb = (sme_active() && is_kdump_kernel())
+		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
+							dev_table_size)
+		    : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+
 	if (!old_devtb)
 		return false;
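
For reference, __sme_clr() (and its counterpart __sme_set()) are tiny helpers from
<linux/mem_encrypt.h>. Conceptually they are just the following, with sme_me_mask
being 0 when SME is not active, so they leave the value untouched on non-SME
configurations (simplified sketch, not the exact kernel definitions):

/* sme_me_mask holds the C-bit (the page-table encryption bit) when SME is in use */
#define __sme_set(x)	((x) | sme_me_mask)
#define __sme_clr(x)	((x) & ~sme_me_mask)

So the changed line above does nothing more than strip the encryption bit from the
old device-table entry before masking out the page offset.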
 

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-09-30  8:37   ` [PATCH v9 " lijiang
  2018-10-01 20:22     ` Borislav Petkov
@ 2018-10-06 11:47     ` tip-bot for Lianbo Jiang
  2018-10-07  5:55       ` lijiang
  1 sibling, 1 reply; 30+ messages in thread
From: tip-bot for Lianbo Jiang @ 2018-10-06 11:47 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, lijiang, bp, hpa, tglx, mingo

Commit-ID:  992b649a3f013465d8128da02e5449def662a4c3
Gitweb:     https://git.kernel.org/tip/992b649a3f013465d8128da02e5449def662a4c3
Author:     Lianbo Jiang <lijiang@redhat.com>
AuthorDate: Sun, 30 Sep 2018 16:37:41 +0800
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Sat, 6 Oct 2018 12:09:26 +0200

kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled

In the kdump kernel, the memory of the first kernel needs to be dumped
into the vmcore file.

If SME is enabled in the first kernel, the old memory has to be remapped
with the memory encryption mask in order to access it properly.

Split copy_oldmem_page() functionality to handle encrypted memory
properly.

 [ bp: Heavily massage everything. ]

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: kexec@lists.infradead.org
Cc: tglx@linutronix.de
Cc: mingo@redhat.com
Cc: hpa@zytor.com
Cc: akpm@linux-foundation.org
Cc: dan.j.williams@intel.com
Cc: bhelgaas@google.com
Cc: baiyaowei@cmss.chinamobile.com
Cc: tiwai@suse.de
Cc: brijesh.singh@amd.com
Cc: dyoung@redhat.com
Cc: bhe@redhat.com
Cc: jroedel@suse.de
Link: https://lkml.kernel.org/r/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com
---
 arch/x86/kernel/crash_dump_64.c | 60 ++++++++++++++++++++++++++++-------------
 fs/proc/vmcore.c                | 24 ++++++++++++-----
 include/linux/crash_dump.h      |  4 +++
 3 files changed, 63 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 4f2e0778feac..eb8ab3915268 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -11,40 +11,62 @@
 #include <linux/uaccess.h>
 #include <linux/io.h>
 
-/**
- * copy_oldmem_page - copy one page from "oldmem"
- * @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- *	space or user address space (see @userbuf)
- * @csize: number of bytes to copy
- * @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- *	otherwise @buf is in kernel address space, use memcpy().
- *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
- */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-		size_t csize, unsigned long offset, int userbuf)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+				  unsigned long offset, int userbuf,
+				  bool encrypted)
 {
 	void  *vaddr;
 
 	if (!csize)
 		return 0;
 
-	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+	if (encrypted)
+		vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
+	else
+		vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+
 	if (!vaddr)
 		return -ENOMEM;
 
 	if (userbuf) {
-		if (copy_to_user(buf, vaddr + offset, csize)) {
-			iounmap(vaddr);
+		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+			iounmap((void __iomem *)vaddr);
 			return -EFAULT;
 		}
 	} else
 		memcpy(buf, vaddr + offset, csize);
 
 	set_iounmap_nonlazy();
-	iounmap(vaddr);
+	iounmap((void __iomem *)vaddr);
 	return csize;
 }
+
+/**
+ * copy_oldmem_page - copy one page of memory
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from the old kernel's memory. For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ */
+ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+			 unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+}
+
+/**
+ * copy_oldmem_page_encrypted - same as copy_oldmem_page() above but ioremap the
+ * memory with the encryption mask set to accomodate kdump on SME-enabled
+ * machines.
+ */
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index cbde728f8ac6..42c32d06f7da 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -24,6 +24,8 @@
 #include <linux/vmalloc.h>
 #include <linux/pagemap.h>
 #include <linux/uaccess.h>
+#include <linux/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/io.h>
 #include "internal.h"
 
@@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
 
 /* Reads a page from the oldmem device from given offset. */
 static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf)
+				u64 *ppos, int userbuf,
+				bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -120,8 +123,15 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
 		if (pfn_is_ram(pfn) == 0)
 			memset(buf, 0, nr_bytes);
 		else {
-			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
-						offset, userbuf);
+			if (encrypted)
+				tmp = copy_oldmem_page_encrypted(pfn, buf,
+								 nr_bytes,
+								 offset,
+								 userbuf);
+			else
+				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
+						       offset, userbuf);
+
 			if (tmp < 0)
 				return tmp;
 		}
@@ -155,7 +165,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
@@ -163,7 +173,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  */
 ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, sme_active());
 }
 
 /*
@@ -173,6 +183,7 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot)
 {
+	prot = pgprot_encrypted(prot);
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
@@ -351,7 +362,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 					    m->offset + m->size - *fpos,
 					    buflen);
 			start = m->paddr + *fpos - m->offset;
-			tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
+			tmp = read_from_oldmem(buffer, tsz, &start,
+					       userbuf, sme_active());
 			if (tmp < 0)
 				return tmp;
 			buflen -= tsz;
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..f774c5eb9e3c 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,10 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+
 void vmcore_cleanup(void);
 
 /* Architecture code defines this if there are other possible ELF
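
One note on the vmcore.c hunks above: __read_vmcore() and elfcorehdr_read_notes()
gate the encrypted path on sme_active(), while elfcorehdr_read() keeps the
unencrypted path — presumably because the ELF core header is staged in memory the
kdump kernel can read directly, whereas the notes and the dump data live in the old
kernel's (possibly encrypted) memory. sme_active() itself is roughly the following
on x86 (simplified sketch; the real helper lives in arch/x86/mm/mem_encrypt.c):

bool sme_active(void)
{
	/* a non-zero encryption mask was set up at boot and SEV is not in use */
	return sme_me_mask && !sev_enabled;
}

In other words, the new code paths only kick in when the first kernel really booted
with host-side SME; with mem_encrypt=off everything still goes through the old path.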

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-06 11:47     ` [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted " tip-bot for Lianbo Jiang
@ 2018-10-07  5:55       ` lijiang
  2018-10-07  8:47         ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-07  5:55 UTC (permalink / raw)
  To: bp, linux-kernel, mingo, tglx, hpa, linux-tip-commits

On 2018-10-06 19:47, tip-bot for Lianbo Jiang wrote:
> Commit-ID:  992b649a3f013465d8128da02e5449def662a4c3
> Gitweb:     https://git.kernel.org/tip/992b649a3f013465d8128da02e5449def662a4c3
> Author:     Lianbo Jiang <lijiang@redhat.com>
> AuthorDate: Sun, 30 Sep 2018 16:37:41 +0800
> Committer:  Borislav Petkov <bp@suse.de>
> CommitDate: Sat, 6 Oct 2018 12:09:26 +0200
> 
> kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
> 
> In the kdump kernel, the memory of the first kernel needs to be dumped
> into the vmcore file.
> 
> If SME is enabled in the first kernel, the old memory has to be remapped
> with the memory encryption mask in order to access it properly.
> 
> Split copy_oldmem_page() functionality to handle encrypted memory
> properly.
> 
>  [ bp: Heavily massage everything. ]
> 
> Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Cc: kexec@lists.infradead.org
> Cc: tglx@linutronix.de
> Cc: mingo@redhat.com
> Cc: hpa@zytor.com
> Cc: akpm@linux-foundation.org
> Cc: dan.j.williams@intel.com
> Cc: bhelgaas@google.com
> Cc: baiyaowei@cmss.chinamobile.com
> Cc: tiwai@suse.de
> Cc: brijesh.singh@amd.com
> Cc: dyoung@redhat.com
> Cc: bhe@redhat.com
> Cc: jroedel@suse.de
> Link: https://lkml.kernel.org/r/be7b47f9-6be6-e0d1-2c2a-9125bc74b818@redhat.com
> ---
>  arch/x86/kernel/crash_dump_64.c | 60 ++++++++++++++++++++++++++++-------------
>  fs/proc/vmcore.c                | 24 ++++++++++++-----
>  include/linux/crash_dump.h      |  4 +++
>  3 files changed, 63 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
> index 4f2e0778feac..eb8ab3915268 100644
> --- a/arch/x86/kernel/crash_dump_64.c
> +++ b/arch/x86/kernel/crash_dump_64.c
> @@ -11,40 +11,62 @@
>  #include <linux/uaccess.h>
>  #include <linux/io.h>
>  
> -/**
> - * copy_oldmem_page - copy one page from "oldmem"
> - * @pfn: page frame number to be copied
> - * @buf: target memory address for the copy; this can be in kernel address
> - *	space or user address space (see @userbuf)
> - * @csize: number of bytes to copy
> - * @offset: offset in bytes into the page (based on pfn) to begin the copy
> - * @userbuf: if set, @buf is in user address space, use copy_to_user(),
> - *	otherwise @buf is in kernel address space, use memcpy().
> - *
> - * Copy a page from "oldmem". For this page, there is no pte mapped
> - * in the current kernel. We stitch up a pte, similar to kmap_atomic.
> - */
> -ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
> -		size_t csize, unsigned long offset, int userbuf)
> +static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
> +				  unsigned long offset, int userbuf,
> +				  bool encrypted)
>  {
>  	void  *vaddr;
>  
>  	if (!csize)
>  		return 0;
>  
> -	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
> +	if (encrypted)
> +		vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
> +	else
> +		vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
> +
>  	if (!vaddr)
>  		return -ENOMEM;
>  
>  	if (userbuf) {
> -		if (copy_to_user(buf, vaddr + offset, csize)) {
> -			iounmap(vaddr);
> +		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
> +			iounmap((void __iomem *)vaddr);
>  			return -EFAULT;
>  		}
>  	} else
>  		memcpy(buf, vaddr + offset, csize);
>  
>  	set_iounmap_nonlazy();
> -	iounmap(vaddr);
> +	iounmap((void __iomem *)vaddr);
>  	return csize;
>  }
> +
> +/**
> + * copy_oldmem_page - copy one page of memory
> + * @pfn: page frame number to be copied
> + * @buf: target memory address for the copy; this can be in kernel address
> + *	space or user address space (see @userbuf)
> + * @csize: number of bytes to copy
> + * @offset: offset in bytes into the page (based on pfn) to begin the copy
> + * @userbuf: if set, @buf is in user address space, use copy_to_user(),
> + *	otherwise @buf is in kernel address space, use memcpy().
> + *
> + * Copy a page from the old kernel's memory. For this page, there is no pte
> + * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
> + */
> +ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
> +			 unsigned long offset, int userbuf)
> +{
> +	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
> +}
> +
> +/**
> + * copy_oldmem_page_encrypted - same as copy_oldmem_page() above but ioremap the
> + * memory with the encryption mask set to accomodate kdump on SME-enabled
> + * machines.
> + */
> +ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> +				   unsigned long offset, int userbuf)
> +{
> +	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
> +}
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index cbde728f8ac6..42c32d06f7da 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -24,6 +24,8 @@
>  #include <linux/vmalloc.h>
>  #include <linux/pagemap.h>
>  #include <linux/uaccess.h>
> +#include <linux/mem_encrypt.h>
> +#include <asm/pgtable.h>
>  #include <asm/io.h>
>  #include "internal.h"
>  
> @@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
>  
>  /* Reads a page from the oldmem device from given offset. */
>  static ssize_t read_from_oldmem(char *buf, size_t count,
> -				u64 *ppos, int userbuf)
> +				u64 *ppos, int userbuf,
> +				bool encrypted)
>  {
>  	unsigned long pfn, offset;
>  	size_t nr_bytes;
> @@ -120,8 +123,15 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
>  		if (pfn_is_ram(pfn) == 0)
>  			memset(buf, 0, nr_bytes);
>  		else {
> -			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
> -						offset, userbuf);
> +			if (encrypted)
> +				tmp = copy_oldmem_page_encrypted(pfn, buf,
> +								 nr_bytes,
> +								 offset,
> +								 userbuf);
> +			else
> +				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
> +						       offset, userbuf);
> +
>  			if (tmp < 0)
>  				return tmp;
>  		}
> @@ -155,7 +165,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
>   */
>  ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
>  {
> -	return read_from_oldmem(buf, count, ppos, 0);
> +	return read_from_oldmem(buf, count, ppos, 0, false);
>  }
>  
>  /*
> @@ -163,7 +173,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
>   */
>  ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
>  {
> -	return read_from_oldmem(buf, count, ppos, 0);
> +	return read_from_oldmem(buf, count, ppos, 0, sme_active());
>  }
>  
>  /*
> @@ -173,6 +183,7 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
>  				  unsigned long from, unsigned long pfn,
>  				  unsigned long size, pgprot_t prot)
>  {
> +	prot = pgprot_encrypted(prot);
>  	return remap_pfn_range(vma, from, pfn, size, prot);
>  }
>  
> @@ -351,7 +362,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
>  					    m->offset + m->size - *fpos,
>  					    buflen);
>  			start = m->paddr + *fpos - m->offset;
> -			tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
> +			tmp = read_from_oldmem(buffer, tsz, &start,
> +					       userbuf, sme_active());
>  			if (tmp < 0)
>  				return tmp;
>  			buflen -= tsz;
> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
> index 3e4ba9d753c8..f774c5eb9e3c 100644
> --- a/include/linux/crash_dump.h
> +++ b/include/linux/crash_dump.h
> @@ -26,6 +26,10 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
>  
>  extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
>  						unsigned long, int);
> +extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
> +					  size_t csize, unsigned long offset,
> +					  int userbuf);
> +

Here, it may cause a compile error.
Links: https://lore.kernel.org/patchwork/patch/993337/
kbuild test robot Sept. 29, 2018, 6:25 p.m. UTC | #1

The correct patch is this one; you can refer to "Re: [PATCH v9 4/4] kdump/vmcore: support
encrypted old memory with SME enabled" or the link below.
Links: https://lore.kernel.org/patchwork/patch/993538/#1177439
lijiang Sept. 30, 2018, 8:37 a.m. UTC | #2 

diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..84d8ddcb818e 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,19 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+#if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_X86_64)
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+#else
+static inline
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return 0;
+}
+#endif
+


Thanks.
Lianbo

>  void vmcore_cleanup(void);
>  
>  /* Architecture code defines this if there are other possible ELF
> 

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)
  2018-10-06  9:56             ` Borislav Petkov
@ 2018-10-07  6:09               ` lijiang
  0 siblings, 0 replies; 30+ messages in thread
From: lijiang @ 2018-10-07  6:09 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, brijesh.singh,
	dyoung, bhe, jroedel

On 2018-10-06 17:56, Borislav Petkov wrote:
> On Fri, Oct 05, 2018 at 01:52:26PM +0800, lijiang wrote:
>>      b. add the parameter "mem_encrypt=on" to the kernel command line in grub.cfg if
>>         this machine has the SME feature, and also add crashkernel=xx, which reserves
>>         memory for kdump.
> 
> Ok, I'm doing the simpler crashkernel= cmdline:
> 
> crashkernel=256M
> 
> That says:
> 
> [    0.011918] Reserving 256MB of memory at 640MB for crashkernel (System RAM: 262030MB)
> 
>> Step 5: reboot, and then load the crash kernel image and kdump initramfs.
>>
>>      a: When SME is enabled, I use this command to load them:
>>
>>         #kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initramfs-4.19.0-rc6+kdump.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap mem_encrypt=on console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"
> 
> Ok, did that, my cmdline is:
> 
> ~/bpetkov/src/kexec-tools/build/sbin/kexec -p /boot/vmlinuz-4.19.0-rc6+ --initrd=/boot/initrd-4.19.0-rc6+ --command-line="root=/dev/mapper/ubuntu--010236012132--vg-leap15 rd.lvm.lv=ubuntu--010236012132--vg-leap15/root rd.lvm.lv=ubuntu--010236012132--vg-leap15/swap splash=silent showopts console=ttyS5,115200 console=tty0 debug ignore_loglevel log_buf_len=16M nr_cpus=1 irqpoll maxcpus=1 reset_devices vga=normal mem_encrypt=on LANG=en_US.UTF-8 earlyprintk=serial cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never disable_cpu_apicid=0"
> 
> Verified it loaded ok:
> 
> $ grep . /sys/kernel/kexec_*
> /sys/kernel/kexec_crash_loaded:1
> /sys/kernel/kexec_crash_size:268435456
> /sys/kernel/kexec_loaded:0
> 
>> Step 6: trigger panic by sysrq
>>         #echo c > /proc/sysrq-trigger
> 
> Did that and I got into the kdump kernel with SME. So I'd guess your kdump
> kernel command line was needed - I was missing a bunch of switches and
> remote-debugging a box kexecing is not fun.
> 
> So thanks a lot for the detailed steps, I'm putting them to my notes.
> 

It's my pleasure.
Also thanks for your patience and help.

Regards,
Lianbo

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-07  5:55       ` lijiang
@ 2018-10-07  8:47         ` Borislav Petkov
  2018-10-08  3:30           ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-07  8:47 UTC (permalink / raw)
  To: lijiang; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On Sun, Oct 07, 2018 at 01:55:33PM +0800, lijiang wrote:
> Here, it may cause a compile error.

Are you sure? The configs I tried worked fine but I'm open to being
shown configs which fail the build.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-07  8:47         ` Borislav Petkov
@ 2018-10-08  3:30           ` lijiang
  2018-10-08  5:37             ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-08  3:30 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On 2018-10-07 16:47, Borislav Petkov wrote:
> On Sun, Oct 07, 2018 at 01:55:33PM +0800, lijiang wrote:
>> Here, it may cause a compile error.
> 
> Are you sure? The configs I tried worked fine but I'm open to being
> shown configs which fail the build.
> 

Yes. As previously mentioned, the correct patch is this one:

diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..84d8ddcb818e 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,19 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+#if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_X86_64)
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
+#else
+static inline
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return 0;
+}
+#endif
+

I used the patch above to test six compile cases. All of them passed; there
was no compile error.

I'm not sure whether the kernel options or the build environment are different.
Would you like to share your kernel options (.config)? I will compile with your
kernel options and check whether I can also reproduce your compile error.

1. x86_64 (CONFIG_X86_64=y)
   a.     
      CONFIG_AMD_MEM_ENCRYPT=y
      CONFIG_CRASH_DUMP=y

   b.
      # CONFIG_AMD_MEM_ENCRYPT is not set
      # CONFIG_CRASH_DUMP is not set

   c. 
      # CONFIG_AMD_MEM_ENCRYPT is not set
      CONFIG_CRASH_DUMP=y

   d. 
      CONFIG_AMD_MEM_ENCRYPT=y
      # CONFIG_CRASH_DUMP is not set

Compile command:
#make clean
#make ARCH=x86_64 -j32

2. i386 (CONFIG_X86_32=y)
   a. 
   CONFIG_CRASH_DUMP=y

   b.
   # CONFIG_CRASH_DUMP is not set

Compile command:
#make clean
#make ARCH=i386 -j32

Thanks.
Lianbo

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  3:30           ` lijiang
@ 2018-10-08  5:37             ` Borislav Petkov
  2018-10-08  7:11               ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-08  5:37 UTC (permalink / raw)
  To: lijiang; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On Mon, Oct 08, 2018 at 11:30:56AM +0800, lijiang wrote:
> Yes. As previously mentioned, the correct patch is this one:

No, that chunk is not needed and I removed it. But I'd leave it as
an exercise to you to figure out why... or to prove me wrong with a
.config.

:-)

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  5:37             ` Borislav Petkov
@ 2018-10-08  7:11               ` lijiang
  2018-10-08  8:00                 ` Borislav Petkov
  2018-10-09 10:30                 ` [tip:x86/mm] proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted() tip-bot for Borislav Petkov
  0 siblings, 2 replies; 30+ messages in thread
From: lijiang @ 2018-10-08  7:11 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

[-- Attachment #1: Type: text/plain, Size: 749 bytes --]

On 2018-10-08 13:37, Borislav Petkov wrote:
> On Mon, Oct 08, 2018 at 11:30:56AM +0800, lijiang wrote:
>> Yes. As previously mentioned, the correct patch is this one:
> 
> No, that chunk is not needed and I removed it. But I'd leave it as
> an exercise to you to figure out why... or to prove me wrong with a
> .config.
> 
> :-)
> 

I used this ".config" to compile kernel in the attachment, and got a compile error.
Would you like to have a try?

[root@hp-dl385g10-03 linux]# make ARCH=i386 -j32
  ......
  LD      vmlinux.o
  MODPOST vmlinux.o
fs/proc/vmcore.o:In function ‘read_from_oldmem’:
/home/linux/fs/proc/vmcore.c:127:undefined reference to ‘copy_oldmem_page_encrypted’
make: *** [vmlinux] error 1


Regards,
Lianbo

[-- Attachment #2: i386_config.gz --]
[-- Type: application/gzip, Size: 24793 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  7:11               ` lijiang
@ 2018-10-08  8:00                 ` Borislav Petkov
  2018-10-08  8:47                   ` lijiang
  2018-10-09 10:30                 ` [tip:x86/mm] proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted() tip-bot for Borislav Petkov
  1 sibling, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-08  8:00 UTC (permalink / raw)
  To: lijiang; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On Mon, Oct 08, 2018 at 03:11:56PM +0800, lijiang wrote:
> I used this ".config" to compile kernel in the attachment, and got a compile error.
> Would you like to have a try?
> 
> [root@hp-dl385g10-03 linux]# make ARCH=i386 -j32
>   ......
>   LD      vmlinux.o
>   MODPOST vmlinux.o
> fs/proc/vmcore.o:In function ‘read_from_oldmem’:
> /home/linux/fs/proc/vmcore.c:127:undefined reference to ‘copy_oldmem_page_encrypted’
> make: *** [vmlinux] error 1

Thanks, that triggered here. Ok, I guess something like this, to avoid
the ugly ifdeffery:

---
diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
index 33ee47670b99..8696800f2eea 100644
--- a/arch/x86/kernel/crash_dump_32.c
+++ b/arch/x86/kernel/crash_dump_32.c
@@ -80,6 +80,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
 	return csize;
 }
 
+/*
+ * 32-bit parrot version to avoid build errors.
+ */
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	WARN_ON_ONCE(1);
+	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+}
+
 static int __init kdump_buf_page_init(void)
 {
 	int ret = 0;



-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  8:00                 ` Borislav Petkov
@ 2018-10-08  8:47                   ` lijiang
  2018-10-08  8:59                     ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: lijiang @ 2018-10-08  8:47 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On 2018-10-08 16:00, Borislav Petkov wrote:
> On Mon, Oct 08, 2018 at 03:11:56PM +0800, lijiang wrote:
>> I used this ".config" to compile kernel in the attachment, and got a compile error.
>> Would you like to have a try?
>>
>> [root@hp-dl385g10-03 linux]# make ARCH=i386 -j32
>>   ......
>>   LD      vmlinux.o
>>   MODPOST vmlinux.o
>> fs/proc/vmcore.o:In function ‘read_from_oldmem’:
>> /home/linux/fs/proc/vmcore.c:127:undefined reference to ‘copy_oldmem_page_encrypted’
>> make: *** [vmlinux] error 1
> 
> Thanks, that triggered here. Ok, I guess something like this, to avoid
> the ugly ifdeffery:
> 
> ---
> diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
> index 33ee47670b99..8696800f2eea 100644
> --- a/arch/x86/kernel/crash_dump_32.c
> +++ b/arch/x86/kernel/crash_dump_32.c
> @@ -80,6 +80,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
>  	return csize;
>  }
>  
> +/*
> + * 32-bit parrot version to avoid build errors.
> + */
> +ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> +				   unsigned long offset, int userbuf)
> +{
> +	WARN_ON_ONCE(1);
> +	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> +}
> +

It looks like a good way to avoid the 'ifdeffery', and it's also good enough for i386.

But for other architectures, such as POWERPC/ARM..., we would also have to add the same
function for every architecture; otherwise, I guess they would hit the same compile
error.

Sometimes it's hard to make a choice.

Regards,
Lianbo
>  static int __init kdump_buf_page_init(void)
>  {
>  	int ret = 0;
> 
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  8:47                   ` lijiang
@ 2018-10-08  8:59                     ` Borislav Petkov
  2018-10-08 13:43                       ` Borislav Petkov
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-08  8:59 UTC (permalink / raw)
  To: lijiang; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On Mon, Oct 08, 2018 at 04:47:34PM +0800, lijiang wrote:
> It looks like a good way to avoid the 'ifdeffery', and it's also good enough for i386.
> 
> But for other architectures, such as POWERPC/ARM..., we would also have to add the same
> function for every architecture; otherwise, I guess they would hit the same compile
> error.

Yap, just realized that and looking at the rest of fs/proc/vmcore.c -
such functions are defined with the __weak attribute. Lemme see if that
works better.
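
(For readers unfamiliar with the mechanism: __weak is the kernel's shorthand for GCC's
weak-symbol attribute. A weak definition is only used when no ordinary, "strong"
definition of the same symbol exists anywhere else in the image, so a generic file can
carry a fallback and an architecture can override it without any #ifdef. A tiny
stand-alone illustration with made-up names, not taken from the series:

/* generic.c: weak default, picked only if nothing else defines the symbol */
int __attribute__((weak)) arch_copy_hook(void *buf, unsigned int len)
{
	return 0;			/* generic fallback */
}

/* arch.c: a normal definition with the same signature; when both objects are
 * linked together, the linker silently prefers it over the weak default above. */
int arch_copy_hook(void *buf, unsigned int len)
{
	return (int)len;		/* architecture-specific behaviour */
}

The patch in the next message applies exactly this pattern: a __weak
copy_oldmem_page_encrypted() in fs/proc/vmcore.c as the fallback, with x86-64 keeping
its strong definition in crash_dump_64.c.)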

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08  8:59                     ` Borislav Petkov
@ 2018-10-08 13:43                       ` Borislav Petkov
  2018-10-09  2:54                         ` lijiang
  0 siblings, 1 reply; 30+ messages in thread
From: Borislav Petkov @ 2018-10-08 13:43 UTC (permalink / raw)
  To: lijiang; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On Mon, Oct 08, 2018 at 10:59:09AM +0200, Borislav Petkov wrote:
> On Mon, Oct 08, 2018 at 04:47:34PM +0800, lijiang wrote:
> > It looks like a good way to avoid the 'ifdeffery', and it's also good enough for i386.
> > 
> > But for other architectures, such as POWERPC/ARM..., we would also have to add the same
> > function for every architecture; otherwise, I guess they would hit the same compile
> > error.
> 
> Yap, just realized that and looking at the rest of fs/proc/vmcore.c -
> such functions are defined with the __weak attribute. Lemme see if that
> works better.

Seems so. I'll hammer on it more today:

---
 fs/proc/vmcore.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 42c32d06f7da..91ae16fbd7d5 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -187,6 +187,16 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
+/*
+ * Architectures which support memory encryption override this.
+ */
+ssize_t __weak
+copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+			   unsigned long offset, int userbuf)
+{
+	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+}
+
 /*
  * Copy to either kernel or user space
  */
-- 
2.19.0.271.gfe8321ec057f

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* Re: [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted memory with SME enabled
  2018-10-08 13:43                       ` Borislav Petkov
@ 2018-10-09  2:54                         ` lijiang
  0 siblings, 0 replies; 30+ messages in thread
From: lijiang @ 2018-10-09  2:54 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: linux-kernel, mingo, tglx, hpa, linux-tip-commits

On 2018-10-08 21:43, Borislav Petkov wrote:
> On Mon, Oct 08, 2018 at 10:59:09AM +0200, Borislav Petkov wrote:
>> On Mon, Oct 08, 2018 at 04:47:34PM +0800, lijiang wrote:
>>> It looks like a good way to avoid the 'ifdefined', and it's also good enough for i386.
>>>
>>> But for other architectures, such as POWERPC/ARM..., we will also have to add the same
>>> function for every architecture. Otherwise, I guess they would also have the same compile
>>> error on other architectures.
>>
>> Yap, just realized that and looking at the rest of fs/proc/vmcore.c -
>> such functions are defined with the __weak attribute. Lemme see if that
>> works better.
> 
> Seems so. I'll hammer on it more today:
> 
Great! Thank you, Borislav.

Regards,
Lianbo
> ---
>  fs/proc/vmcore.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 42c32d06f7da..91ae16fbd7d5 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -187,6 +187,16 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
>  	return remap_pfn_range(vma, from, pfn, size, prot);
>  }
>  
> +/*
> + * Architectures which support memory encryption override this.
> + */
> +ssize_t __weak
> +copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> +			   unsigned long offset, int userbuf)
> +{
> +	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> +}
> +
>  /*
>   * Copy to either kernel or user space
>   */
> 

* [tip:x86/mm] proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted()
  2018-10-08  7:11               ` lijiang
  2018-10-08  8:00                 ` Borislav Petkov
@ 2018-10-09 10:30                 ` tip-bot for Borislav Petkov
  1 sibling, 0 replies; 30+ messages in thread
From: tip-bot for Borislav Petkov @ 2018-10-09 10:30 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: tglx, hpa, bp, mingo, lijiang, linux-kernel

Commit-ID:  cf089611f4c446285046fcd426d90c18f37d2905
Gitweb:     https://git.kernel.org/tip/cf089611f4c446285046fcd426d90c18f37d2905
Author:     Borislav Petkov <bp@suse.de>
AuthorDate: Mon, 8 Oct 2018 10:05:20 +0200
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Tue, 9 Oct 2018 11:57:28 +0200

proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted()

Lianbo reported a build error with a particular 32-bit config, see Link
below for details.

Provide a weak copy_oldmem_page_encrypted() function which architectures
can override, in the same manner other functionality in that file is
supplied.

Reported-by: Lianbo Jiang <lijiang@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
CC: x86@kernel.org
Link: http://lkml.kernel.org/r/710b9d95-2f70-eadf-c4a1-c3dc80ee4ebb@redhat.com
---
 fs/proc/vmcore.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 42c32d06f7da..91ae16fbd7d5 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -187,6 +187,16 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
+/*
+ * Architectures which support memory encryption override this.
+ */
+ssize_t __weak
+copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+			   unsigned long offset, int userbuf)
+{
+	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+}
+
 /*
  * Copy to either kernel or user space
  */
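
(With this __weak fallback available on every architecture, the common read path in
fs/proc/vmcore.c can pick the encrypted variant without any #ifdef, roughly along these
lines; the variable names are illustrative:)

	if (encrypted)
		tmp = copy_oldmem_page_encrypted(pfn, buf, nr_bytes,
						 offset, userbuf);
	else
		tmp = copy_oldmem_page(pfn, buf, nr_bytes, offset, userbuf);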

end of thread

Thread overview: 30+ messages
2018-09-30  3:10 [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
2018-09-30  3:10 ` [PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
2018-09-30  3:10 ` [PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
2018-10-06 11:46   ` [tip:x86/mm] kexec: Allocate decrypted control pages for kdump if " tip-bot for Lianbo Jiang
2018-09-30  3:10 ` [PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
2018-10-06 11:47   ` [tip:x86/mm] iommu/amd: Remap the IOMMU device table " tip-bot for Lianbo Jiang
2018-09-30  3:10 ` [PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
2018-09-30  4:22   ` kbuild test robot
2018-09-30  8:37   ` [PATCH v9 " lijiang
2018-10-01 20:22     ` Borislav Petkov
2018-10-06 11:47     ` [tip:x86/mm] kdump, proc/vmcore: Enable kdumping encrypted " tip-bot for Lianbo Jiang
2018-10-07  5:55       ` lijiang
2018-10-07  8:47         ` Borislav Petkov
2018-10-08  3:30           ` lijiang
2018-10-08  5:37             ` Borislav Petkov
2018-10-08  7:11               ` lijiang
2018-10-08  8:00                 ` Borislav Petkov
2018-10-08  8:47                   ` lijiang
2018-10-08  8:59                     ` Borislav Petkov
2018-10-08 13:43                       ` Borislav Petkov
2018-10-09  2:54                         ` lijiang
2018-10-09 10:30                 ` [tip:x86/mm] proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted() tip-bot for Borislav Petkov
2018-10-02 11:40 ` [PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME) Borislav Petkov
2018-10-03  3:57   ` lijiang
2018-10-03 11:34     ` Borislav Petkov
2018-10-04  9:33       ` lijiang
2018-10-04 19:02         ` Borislav Petkov
2018-10-05  5:52           ` lijiang
2018-10-06  9:56             ` Borislav Petkov
2018-10-07  6:09               ` lijiang
