linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME)
@ 2018-09-29 15:43 Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 1/4 v8] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Lianbo Jiang @ 2018-09-29 15:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled on an AMD machine, kdump also needs to be supported.
Because the memory is encrypted in the first kernel, the old memory has to
be remapped in the kdump kernel for dumping data, and SME must also be
enabled in the kdump kernel; otherwise the old memory cannot be decrypted.

For kdump, it is necessary to distinguish whether the memory is encrypted.
Furthermore, it must also be known which part of the memory is encrypted
and which is not, so that the memory can be remapped appropriately and the
CPU told how to access it.

As we know, a page of memory that is marked as encrypted is automatically
decrypted when read from DRAM and automatically encrypted when written to
DRAM. If the old memory is encrypted, it has to be remapped with the memory
encryption mask, so that it is automatically decrypted when read from DRAM.
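
Whether the CPU decrypts a given mapping is controlled purely by the
encryption bit (the "C-bit") in the page table entry. Conceptually, and
only as a rough sketch using the existing pgprot_encrypted()/
pgprot_decrypted() helpers (the condition below is purely illustrative):

	pgprot_t prot = PAGE_KERNEL;

	if (old_memory_was_encrypted)		/* illustrative condition */
		prot = pgprot_encrypted(prot);	/* set sme_me_mask in the PTE */
	else
		prot = pgprot_decrypted(prot);	/* clear sme_me_mask */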

For kdump with SME, there are two cases that are not supported:

 ----------------------------------------------
| first-kernel | second-kernel | kdump support |
|      (mem_encrypt=on|off)    |   (yes|no)    |
|--------------+---------------+---------------|
|     on       |     on        |     yes       |
|     off      |     off       |     yes       |
|     on       |     off       |     no        |
|     off      |     on        |     no        |
|______________|_______________|_______________|

1. SME is enabled in the first kernel, but SME is disabled in the kdump kernel
In this case, because the old memory is encrypted, it can't be decrypted.
The root cause is that the encryption key is not visible to any software
running on the CPU cores (AMD CPU with SME), and is randomly generated on
each system reset. That is to say, the kdump kernel has no chance to get
the encryption key, so the encrypted memory can not be decrypted unless
SME is active.

2. SME is disabled in the first kernel, but SME is enabled in the kdump kernel
It is unnecessary to support this case: the old memory is unencrypted and
can be dumped as usual, so there is no need to enable SME in the kdump
kernel. Moreover, if this scenario had to be supported, it would increase
the complexity of the code, because we would have to consider how to pass
the SME flag from the first kernel to the kdump kernel, in order to let
the kdump kernel know whether the old memory is encrypted.

There are two methods to pass the SME flag to the kdump kernel. The first
method is to modify the assembly code, which touches some common code and
makes the path too long. The second method is to use kexec-tools: the SME
flag would be exported by the first kernel via "proc" or "sysfs",
kexec-tools would read it when loading the image and save it in
boot_params, and the kdump kernel could then remap the old memory
according to the previously saved SME flag. But it is too expensive to do
this.

These patches are only for SME kdump; they do not support SEV kdump.

Test tools:
makedumpfile[v1.6.3]: https://github.com/LianboJ/makedumpfile
commit <e1de103eca8f> "A draft for kdump vmcore about AMD SME"
Note: this draft can only dump the vmcore when SME is enabled.

crash-7.2.3: https://github.com/crash-utility/crash.git
commit <001f77a05585> "Fix for Linux 4.19-rc1 and later kernels that contain kernel commit <7290d5809571>"

kexec-tools-2.0.17: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
commit <b9de21ef51a7> "kexec: fix for "Unhandled rela relocation: R_X86_64_PLT32" error"

Note:
Before you load the kernel and initramfs for kdump, this patch
(http://lists.infradead.org/pipermail/kexec/2018-September/021460.html) must be merged
into kexec-tools; only then will the kdump kernel work well. This is needed because
the patch "x86/ioremap: strengthen the logic in early_memremap_pgprot_adjust() to
adjust encryption mask" was removed from this series after v6.

Test environment:
HP ProLiant DL385Gen10 AMD EPYC 7251
8-Core Processor
32768 MB memory
600 GB disk space

Linux 4.19-rc5:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
commit <6bf4ca7fbc85> "Linux 4.19-rc5"

Reference:
AMD64 Architecture Programmer's Manual
https://support.amd.com/TechDocs/24593.pdf

Changes since v6:
1. Remove the patch "x86/ioremap: strengthen the logic in
early_memremap_pgprot_adjust() to adjust encryption mask".
Dave Young suggested that this patch can be removed and the issue fixed in kexec-tools.
Reference: http://lists.infradead.org/pipermail/kexec/2018-September/021460.html
2. Update the patch log.

Changes since v7:
1. Improve the patch log for patch 1/4. (Suggested by Baoquan He)
2. Add Reviewed-by to all patches. (Tom Lendacky <thomas.lendacky@amd.com>)
3. Add Acked-by to patch 3/4. (Joerg Roedel <jroedel@suse.de>)
4. Remove the header file (linux/crash_dump.h) from
arch/x86/mm/ioremap.c. (Suggested by Borislav)
5. Modify the comment and patch log for patch 2/4. (Suggested by Borislav)
6. Delete the file arch/x86/kernel/crash_dump_encrypt.c and rewrite some
functions. (Suggested by Borislav)
7. Fix all code style issues. (Suggested by Borislav)

Some known issues:
1. About SME
The upstream kernel hangs on an HP machine (DL385 Gen10, AMD EPYC 7251) when
we execute the kexec commands as follows:

# kexec -l /boot/vmlinuz-4.19.0-rc5+ --initrd=/boot/initramfs-4.19.0-rc5+.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root ro mem_encrypt=on rd.lvm.lv=rhel_hp-dl385g10-03/root rd.lvm.lv=rhel_hp-dl385g10-03/swap console=ttyS0,115200n81 LANG=en_US.UTF-8 earlyprintk=serial debug nokaslr"
# kexec -e (or reboot)

But this issue cannot be reproduced on the Speedway machine, and it is
unrelated to the posted patches.

The kernel log:
[ 1248.932239] kexec_core: Starting new kernel
early console in extract_kernel
input_data: 0x000000087e91c3b4
input_len: 0x000000000067fcbd
output: 0x000000087d400000
output_len: 0x0000000001b6fa90
kernel_total_size: 0x0000000001a9d000
trampoline_32bit: 0x0000000000099000

Decompressing Linux...
Parsing ELF...        [---Here the system will hang]

Lianbo Jiang (4):
  x86/ioremap: add a function ioremap_encrypted() to remap kdump old
    memory
  kexec: allocate decrypted control pages for kdump in case SME is
    enabled
  iommu/amd: Remap the device table of IOMMU with the memory encryption
    mask for kdump
  kdump/vmcore: support encrypted old memory with SME enabled

 arch/x86/include/asm/io.h       |  2 +
 arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
 arch/x86/mm/ioremap.c           | 24 ++++++++----
 drivers/iommu/amd_iommu_init.c  | 14 ++++++-
 fs/proc/vmcore.c                | 24 +++++++++---
 include/linux/crash_dump.h      |  3 ++
 kernel/kexec_core.c             | 14 +++++++
 7 files changed, 121 insertions(+), 25 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/4 v8] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory
  2018-09-29 15:43 [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
@ 2018-09-29 15:43 ` Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 2/4 v8] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Lianbo Jiang @ 2018-09-29 15:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled on an AMD machine, the memory is encrypted in the first
kernel. In this case, SME also needs to be enabled in the kdump kernel, and
the old memory has to be remapped with the memory encryption mask.

Here we only talk about the case where SME is active in the first kernel,
and only care about it being active in the kdump kernel as well. There are
four cases that need to be considered.

a. dump vmcore
   It is encrypted in the first kernel, and needs to be read out in the
   kdump kernel.

b. crash notes
   When dumping the vmcore, people usually need to read useful information
   from the notes, and the notes are also encrypted.

c. iommu device table
   It is allocated by the kernel and its pointer is written into the MMIO
   of the AMD IOMMU. It is encrypted in the first kernel, and the old
   content needs to be read to analyze and get useful information.

d. mmio of amd iommu
   Registers reported by the AMD firmware; this is not RAM and is not
   encrypted in either the first kernel or the kdump kernel.

To achieve the goal, the solution is:
1. Add a new bool parameter "encrypted" to __ioremap_caller().
   It is a low-level function; it checks the newly added parameter and, if
   the parameter is true and we are in the kdump kernel, remaps the memory
   with the SME mask.

2. Add a new function ioremap_encrypted() that explicitly passes in "true"
   for "encrypted".
   For the above a, b and c, the kdump kernel will call ioremap_encrypted()
   (see the usage sketch after the diagram below).

3. Adjust all existing ioremap wrapper functions to pass in "false" for
   "encrypted", so that they behave as before.

   ioremap_encrypted()\
   ioremap_cache()     |
   ioremap_prot()      |
   ioremap_wt()        |->__ioremap_caller()
   ioremap_wc()        |
   ioremap_uc()        |
   ioremap_nocache()  /
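
For a, b and c above, a caller in the kdump kernel would use the new helper
roughly as follows (minimal usage sketch only; the real callers are added
by patches 3/4 and 4/4, e.g. in copy_oldmem_page_encrypted()):

	/* Sketch: map a page of old, encrypted RAM with the SME mask. */
	void *vaddr = (__force void *)ioremap_encrypted(paddr, PAGE_SIZE);

	if (vaddr) {
		/* reads are transparently decrypted by the hardware */
		memcpy(buf, vaddr + offset, len);
		iounmap((void __iomem *)vaddr);
	}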

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Remove a redundant header file "linux/crash_dump.h". (Suggested by
Borislav)
2. Fix code style issues. (Suggested by Borislav)
3. Improve the patch log. (Suggested by Baoquan)

 arch/x86/include/asm/io.h |  2 ++
 arch/x86/mm/ioremap.c     | 24 ++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 6de64840dd22..b7b0bf36c400 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -192,6 +192,8 @@ extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 #define ioremap_cache ioremap_cache
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
 #define ioremap_prot ioremap_prot
+extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
+#define ioremap_encrypted ioremap_encrypted
 
 /**
  * ioremap     -   map bus memory into CPU space
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c63a545ec199..24e0920a9b25 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -131,7 +131,8 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-		unsigned long size, enum page_cache_mode pcm, void *caller)
+		unsigned long size, enum page_cache_mode pcm,
+		void *caller, bool encrypted)
 {
 	unsigned long offset, vaddr;
 	resource_size_t last_addr;
@@ -199,7 +200,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	 * resulting mapping.
 	 */
 	prot = PAGE_KERNEL_IO;
-	if (sev_active() && mem_flags.desc_other)
+	if ((sev_active() && mem_flags.desc_other) || encrypted)
 		prot = pgprot_encrypted(prot);
 
 	switch (pcm) {
@@ -291,7 +292,7 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
 	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
 	return __ioremap_caller(phys_addr, size, pcm,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_nocache);
 
@@ -324,7 +325,7 @@ void __iomem *ioremap_uc(resource_size_t phys_addr, unsigned long size)
 	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC;
 
 	return __ioremap_caller(phys_addr, size, pcm,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL_GPL(ioremap_uc);
 
@@ -341,7 +342,7 @@ EXPORT_SYMBOL_GPL(ioremap_uc);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
-					__builtin_return_address(0));
+					__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wc);
 
@@ -358,14 +359,21 @@ EXPORT_SYMBOL(ioremap_wc);
 void __iomem *ioremap_wt(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WT,
-					__builtin_return_address(0));
+					__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wt);
 
+void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size)
+{
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
+				__builtin_return_address(0), true);
+}
+EXPORT_SYMBOL(ioremap_encrypted);
+
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
 	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_cache);
 
@@ -374,7 +382,7 @@ void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 {
 	return __ioremap_caller(phys_addr, size,
 				pgprot2cachemode(__pgprot(prot_val)),
-				__builtin_return_address(0));
+				__builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_prot);
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 2/4 v8] kexec: allocate decrypted control pages for kdump in case SME is enabled
  2018-09-29 15:43 [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 1/4 v8] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
@ 2018-09-29 15:43 ` Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 3/4 v8] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
  3 siblings, 0 replies; 7+ messages in thread
From: Lianbo Jiang @ 2018-09-29 15:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

When SME is enabled in the first kernel, the control pages allocated for
kdump need to be decrypted, because when booting into the kdump kernel,
these pages are not accessed encrypted at the initial stage; this lets the
kdump kernel boot in the same manner as an originally booted kernel.
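
For reference, on x86 the arch hooks used below boil down to clearing or
restoring the encryption attribute on the pages. Approximately (a sketch of
the existing arch/x86 implementation, quoted from memory rather than from
this patch):

	/* arch/x86/kernel/machine_kexec_64.c (approximate) */
	int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
	{
		/* make the kexec/kdump pages unencrypted for the next kernel */
		return set_memory_decrypted((unsigned long)vaddr, pages);
	}

	void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
	{
		/* restore the encrypted mapping before the pages are freed */
		set_memory_encrypted((unsigned long)vaddr, pages);
	}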

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Modify the comment in the code. (Suggested by Borislav)
2. Improve the patch log. (Suggested by Borislav)

 kernel/kexec_core.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 23a83a4da38a..6353daaee7f1 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -471,6 +471,18 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
 		}
 	}
 
+	if (pages) {
+		/*
+		 * For kdump, it needs to ensure that these pages are
+		 * decrypted if SME is enabled.
+		 * By the way, it is unnecessary to call the arch_
+		 * kexec_pre_free_pages(), because these pages are
+		 * reserved memory and once the crash kernel is done,
+		 * it will always remain in these memory until reboot
+		 * or unloading.
+		 */
+		arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+	}
 	return pages;
 }
 
@@ -867,6 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result  = -ENOMEM;
 			goto out;
 		}
+		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
 		ptr = kmap(page);
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
@@ -884,6 +897,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
 		kunmap(page);
+		arch_kexec_pre_free_pages(page_address(page), 1);
 		if (result) {
 			result = -EFAULT;
 			goto out;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 3/4 v8] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump
  2018-09-29 15:43 [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 1/4 v8] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 2/4 v8] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
@ 2018-09-29 15:43 ` Lianbo Jiang
  2018-09-29 15:43 ` [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
  3 siblings, 0 replies; 7+ messages in thread
From: Lianbo Jiang @ 2018-09-29 15:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

The kdump kernel copies the IOMMU device table from the old device table,
which is encrypted when SME is enabled in the first kernel. So the old
device table has to be remapped with the memory encryption mask.
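
The __sme_clr() helper used below simply strips the memory encryption mask
from a value such as the device table pointer; roughly (quoted from
include/linux/mem_encrypt.h for context, not part of this patch):

	#define __sme_set(x)		((x) | sme_me_mask)
	#define __sme_clr(x)		((x) & ~sme_me_mask)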

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
---
 drivers/iommu/amd_iommu_init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 84b3e4445d46..3931c7de7c69 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -902,12 +902,22 @@ static bool copy_device_table(void)
 		}
 	}
 
-	old_devtb_phys = entry & PAGE_MASK;
+	/*
+	 * When SME is enabled in the first kernel, the entry includes the
+	 * memory encryption mask(sme_me_mask), we must remove the memory
+	 * encryption mask to obtain the true physical address in kdump kernel.
+	 */
+	old_devtb_phys = __sme_clr(entry) & PAGE_MASK;
+
 	if (old_devtb_phys >= 0x100000000ULL) {
 		pr_err("The address of old device table is above 4G, not trustworthy!\n");
 		return false;
 	}
-	old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+	old_devtb = (sme_active() && is_kdump_kernel())
+		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
+							dev_table_size)
+		    : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+
 	if (!old_devtb)
 		return false;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-29 15:43 [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
                   ` (2 preceding siblings ...)
  2018-09-29 15:43 ` [PATCH 3/4 v8] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
@ 2018-09-29 15:43 ` Lianbo Jiang
  2018-09-29 18:25   ` kbuild test robot
  3 siblings, 1 reply; 7+ messages in thread
From: Lianbo Jiang @ 2018-09-29 15:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: kexec, tglx, mingo, hpa, x86, akpm, dan.j.williams,
	thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp, brijesh.singh,
	dyoung, bhe, jroedel

In the kdump kernel, the old memory needs to be dumped into the vmcore
file. If SME is enabled in the first kernel, the old memory has to be
remapped with the memory encryption mask, so that it is automatically
decrypted when read from DRAM.

For SME kdump, there are two cases that are not supported:

 ----------------------------------------------
| first-kernel | second-kernel | kdump support |
|      (mem_encrypt=on|off)    |   (yes|no)    |
|--------------+---------------+---------------|
|     on       |     on        |     yes       |
|     off      |     off       |     yes       |
|     on       |     off       |     no        |
|     off      |     on        |     no        |
|______________|_______________|_______________|

1. SME is enabled in the first kernel, but SME is disabled in the kdump kernel
In this case, because the old memory is encrypted, it can't be decrypted.
The root cause is that the encryption key is not visible to any software
running on the CPU cores (AMD CPU with SME), and is randomly generated on
each system reset. That is to say, the kdump kernel has no chance to get
the encryption key, so the encrypted memory can not be decrypted unless
SME is active.

2. SME is disabled in the first kernel, but SME is enabled in the kdump kernel
On the one hand, the old memory is unencrypted and can be dumped as usual,
so SME does not need to be enabled in the kdump kernel; on the other hand,
supporting this would increase the complexity of the code, because we would
have to consider how to pass the SME flag from the first kernel to the
kdump kernel, and it is really too expensive to do this.

These patches are only for SME kdump; they do not support SEV kdump.

Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
Changes since v7:
1. Delete the file arch/x86/kernel/crash_dump_encrypt.c, move
copy_oldmem_page_encrypted() to arch/x86/kernel/crash_dump_64.c, and
rewrite some functions. (Suggested by Borislav)
2. Fix all code style issues. (Suggested by Borislav)
3. Remove a redundant header file. (Suggested by Borislav)
4. Improve the patch log. (Suggested by Borislav)

 arch/x86/kernel/crash_dump_64.c | 65 ++++++++++++++++++++++++++++-----
 fs/proc/vmcore.c                | 24 +++++++++---
 include/linux/crash_dump.h      |  3 ++
 3 files changed, 77 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 4f2e0778feac..6adbde592c44 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,7 +12,7 @@
 #include <linux/io.h>
 
 /**
- * copy_oldmem_page - copy one page from "oldmem"
+ * __copy_oldmem_page - copy one page from "old memory encrypted or decrypted"
  * @pfn: page frame number to be copied
  * @buf: target memory address for the copy; this can be in kernel address
  *	space or user address space (see @userbuf)
@@ -20,31 +20,78 @@
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
  * @userbuf: if set, @buf is in user address space, use copy_to_user(),
  *	otherwise @buf is in kernel address space, use memcpy().
+ * @encrypted: if true, the old memory is encrypted.
+ *             if false, the old memory is decrypted.
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "old memory encrypted or decrypted". For this page, there
+ * is no pte mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-		size_t csize, unsigned long offset, int userbuf)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+				  unsigned long offset, int userbuf,
+				  bool encrypted)
 {
 	void  *vaddr;
 
 	if (!csize)
 		return 0;
 
-	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+	if (encrypted)
+		vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
+	else
+		vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+
 	if (!vaddr)
 		return -ENOMEM;
 
 	if (userbuf) {
-		if (copy_to_user(buf, vaddr + offset, csize)) {
-			iounmap(vaddr);
+		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+			iounmap((void __iomem *)vaddr);
 			return -EFAULT;
 		}
 	} else
 		memcpy(buf, vaddr + offset, csize);
 
 	set_iounmap_nonlazy();
-	iounmap(vaddr);
+	iounmap((void __iomem *)vaddr);
 	return csize;
 }
+
+/**
+ * copy_oldmem_page - copy one page from "old memory decrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory decrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ */
+ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+			 unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+}
+
+/**
+ * copy_oldmem_page_encrypted - copy one page from "old memory encrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "old memory encrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
+ */
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
+				   unsigned long offset, int userbuf)
+{
+	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index cbde728f8ac6..42c32d06f7da 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -24,6 +24,8 @@
 #include <linux/vmalloc.h>
 #include <linux/pagemap.h>
 #include <linux/uaccess.h>
+#include <linux/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/io.h>
 #include "internal.h"
 
@@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
 
 /* Reads a page from the oldmem device from given offset. */
 static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf)
+				u64 *ppos, int userbuf,
+				bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -120,8 +123,15 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
 		if (pfn_is_ram(pfn) == 0)
 			memset(buf, 0, nr_bytes);
 		else {
-			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
-						offset, userbuf);
+			if (encrypted)
+				tmp = copy_oldmem_page_encrypted(pfn, buf,
+								 nr_bytes,
+								 offset,
+								 userbuf);
+			else
+				tmp = copy_oldmem_page(pfn, buf, nr_bytes,
+						       offset, userbuf);
+
 			if (tmp < 0)
 				return tmp;
 		}
@@ -155,7 +165,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
@@ -163,7 +173,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  */
 ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0);
+	return read_from_oldmem(buf, count, ppos, 0, sme_active());
 }
 
 /*
@@ -173,6 +183,7 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot)
 {
+	prot = pgprot_encrypted(prot);
 	return remap_pfn_range(vma, from, pfn, size, prot);
 }
 
@@ -351,7 +362,8 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 					    m->offset + m->size - *fpos,
 					    buflen);
 			start = m->paddr + *fpos - m->offset;
-			tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
+			tmp = read_from_oldmem(buffer, tsz, &start,
+					       userbuf, sme_active());
 			if (tmp < 0)
 				return tmp;
 			buflen -= tsz;
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 3e4ba9d753c8..cf382594568f 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,9 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 
 extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
 						unsigned long, int);
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+					  size_t csize, unsigned long offset,
+					  int userbuf);
 void vmcore_cleanup(void);
 
 /* Architecture code defines this if there are other possible ELF
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-29 15:43 ` [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
@ 2018-09-29 18:25   ` kbuild test robot
  2018-09-30  2:43     ` lijiang
  0 siblings, 1 reply; 7+ messages in thread
From: kbuild test robot @ 2018-09-29 18:25 UTC (permalink / raw)
  To: Lianbo Jiang
  Cc: kbuild-all, linux-kernel, kexec, tglx, mingo, hpa, x86, akpm,
	dan.j.williams, thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp,
	brijesh.singh, dyoung, bhe, jroedel

[-- Attachment #1: Type: text/plain, Size: 2318 bytes --]

Hi Lianbo,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on sof-driver-fuweitax/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Support-kdump-for-AMD-secure-memory-encryption-SME/20180930-001539
base:   https://github.com/fuweitax/linux master
config: i386-randconfig-x0-09300051 (attached as .config)
compiler: gcc-5 (Debian 5.5.0-3) 5.4.1 20171010
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   fs/proc/vmcore.o: In function `read_from_oldmem':
>> fs/proc/vmcore.c:115: undefined reference to `copy_oldmem_page_encrypted'

vim +115 fs/proc/vmcore.c

    88	
    89	/* Reads a page from the oldmem device from given offset. */
    90	static ssize_t read_from_oldmem(char *buf, size_t count,
    91					u64 *ppos, int userbuf,
    92					bool encrypted)
    93	{
    94		unsigned long pfn, offset;
    95		size_t nr_bytes;
    96		ssize_t read = 0, tmp;
    97	
    98		if (!count)
    99			return 0;
   100	
   101		offset = (unsigned long)(*ppos % PAGE_SIZE);
   102		pfn = (unsigned long)(*ppos / PAGE_SIZE);
   103	
   104		do {
   105			if (count > (PAGE_SIZE - offset))
   106				nr_bytes = PAGE_SIZE - offset;
   107			else
   108				nr_bytes = count;
   109	
   110			/* If pfn is not ram, return zeros for sparse dump files */
   111			if (pfn_is_ram(pfn) == 0)
   112				memset(buf, 0, nr_bytes);
   113			else {
   114				if (encrypted)
 > 115					tmp = copy_oldmem_page_encrypted(pfn, buf,
   116									 nr_bytes,
   117									 offset,
   118									 userbuf);
   119				else
   120					tmp = copy_oldmem_page(pfn, buf, nr_bytes,
   121							       offset, userbuf);
   122	
   123				if (tmp < 0)
   124					return tmp;
   125			}
   126			*ppos += nr_bytes;
   127			count -= nr_bytes;
   128			buf += nr_bytes;
   129			read += nr_bytes;
   130			++pfn;
   131			offset = 0;
   132		} while (count);
   133	
   134		return read;
   135	}
   136	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 24793 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled
  2018-09-29 18:25   ` kbuild test robot
@ 2018-09-30  2:43     ` lijiang
  0 siblings, 0 replies; 7+ messages in thread
From: lijiang @ 2018-09-30  2:43 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, linux-kernel, kexec, tglx, mingo, hpa, x86, akpm,
	dan.j.williams, thomas.lendacky, bhelgaas, baiyaowei, tiwai, bp,
	brijesh.singh, dyoung, bhe, jroedel

On 2018-09-30 02:25, kbuild test robot wrote:
> Hi Lianbo,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on sof-driver-fuweitax/master]
> [also build test ERROR on v4.19-rc5 next-20180928]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Support-kdump-for-AMD-secure-memory-encryption-SME/20180930-001539
> base:   https://github.com/fuweitax/linux master
> config: i386-randconfig-x0-09300051 (attached as .config)
> compiler: gcc-5 (Debian 5.5.0-3) 5.4.1 20171010
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386 
> 
> All errors (new ones prefixed by >>):
> 
>    fs/proc/vmcore.o: In function `read_from_oldmem':
>>> fs/proc/vmcore.c:115: undefined reference to `copy_oldmem_page_encrypted'
> 
OK, I will fix this compile error and post again later.
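
One possible way to resolve the link error on configurations that do not
build the x86-64 implementation (e.g. the i386 randconfig above) would be a
weak fallback in fs/proc/vmcore.c that simply degrades to the plain copy.
This is only a sketch of such a fix, not the actual follow-up patch:

ssize_t __weak copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
					  size_t csize, unsigned long offset,
					  int userbuf)
{
	/* fall back to the unencrypted path where SME is not supported */
	return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
}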

Thanks.
> vim +115 fs/proc/vmcore.c
> 
>     88	
>     89	/* Reads a page from the oldmem device from given offset. */
>     90	static ssize_t read_from_oldmem(char *buf, size_t count,
>     91					u64 *ppos, int userbuf,
>     92					bool encrypted)
>     93	{
>     94		unsigned long pfn, offset;
>     95		size_t nr_bytes;
>     96		ssize_t read = 0, tmp;
>     97	
>     98		if (!count)
>     99			return 0;
>    100	
>    101		offset = (unsigned long)(*ppos % PAGE_SIZE);
>    102		pfn = (unsigned long)(*ppos / PAGE_SIZE);
>    103	
>    104		do {
>    105			if (count > (PAGE_SIZE - offset))
>    106				nr_bytes = PAGE_SIZE - offset;
>    107			else
>    108				nr_bytes = count;
>    109	
>    110			/* If pfn is not ram, return zeros for sparse dump files */
>    111			if (pfn_is_ram(pfn) == 0)
>    112				memset(buf, 0, nr_bytes);
>    113			else {
>    114				if (encrypted)
>  > 115					tmp = copy_oldmem_page_encrypted(pfn, buf,
>    116									 nr_bytes,
>    117									 offset,
>    118									 userbuf);
>    119				else
>    120					tmp = copy_oldmem_page(pfn, buf, nr_bytes,
>    121							       offset, userbuf);
>    122	
>    123				if (tmp < 0)
>    124					return tmp;
>    125			}
>    126			*ppos += nr_bytes;
>    127			count -= nr_bytes;
>    128			buf += nr_bytes;
>    129			read += nr_bytes;
>    130			++pfn;
>    131			offset = 0;
>    132		} while (count);
>    133	
>    134		return read;
>    135	}
>    136	
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2018-09-30  2:44 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-29 15:43 [PATCH 0/4 v8] Support kdump for AMD secure memory encryption(SME) Lianbo Jiang
2018-09-29 15:43 ` [PATCH 1/4 v8] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory Lianbo Jiang
2018-09-29 15:43 ` [PATCH 2/4 v8] kexec: allocate decrypted control pages for kdump in case SME is enabled Lianbo Jiang
2018-09-29 15:43 ` [PATCH 3/4 v8] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump Lianbo Jiang
2018-09-29 15:43 ` [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled Lianbo Jiang
2018-09-29 18:25   ` kbuild test robot
2018-09-30  2:43     ` lijiang
