Hello,

This version mostly changes patch 2/3, removing dma_check_mask() from
kernel/dma/mapping.c as suggested by Christoph Hellwig, and also adapting
s390's <asm/mem_encrypt.h>. There's also a small change in patch 1/3, as
mentioned in the changelog below.

Patch 3/3 may or may not need to change s390 code, depending on how Tom
Lendacky's patch is fixed to avoid breaking that architecture, so I haven't
made any changes to it for now.

These patches are applied on top of today's master, which at the time was at
commit 9787aed57dd3 ("coresight: Make the coresight_device_fwnode_match
declaration's fwnode parameter const"), plus a cherry-pick of commit
e67a5ed1f86f ("dma-direct: Force unencrypted DMA under SME for certain DMA
masks"), which is in dma-mapping/for-next and comes from this patch:

https://lore.kernel.org/linux-iommu/10b83d9ff31bca88e94da2ff34e30619eb396078.1562785123.git.thomas.lendacky@amd.com/

I don't have a way to test SME, SEV, or s390's PEF, so the patches have only
been build tested.

Original cover letter below:

Both powerpc¹ and s390² are adding <asm/mem_encrypt.h> headers. Currently,
they have to supply definitions for functions and macros which only have a
meaning on x86: sme_me_mask, sme_active() and sev_active().

Christoph Hellwig suggested to "clean up the Kconfig and generic headers
bits for memory encryption so that we don't need all this boilerplate code",
and this is what this patch series does.
After this patch set, this is powerpc's <asm/mem_encrypt.h>:

#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
#define _ASM_POWERPC_MEM_ENCRYPT_H

#include <asm/svm.h>

static inline bool mem_encrypt_active(void)
{
	return is_secure_guest();
}

static inline bool force_dma_unencrypted(struct device *dev)
{
	return is_secure_guest();
}

int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */

Changelog

Since v1:

- Patch "x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig"
  - Remove definition of ARCH_HAS_MEM_ENCRYPT from s390/Kconfig as well.
  - Reworded patch title and message a little bit.

- Patch "DMA mapping: Move SME handling to x86-specific files"
  - Adapt s390's <asm/mem_encrypt.h> as well.
  - Remove dma_check_mask() from kernel/dma/mapping.c. Suggested by
    Christoph Hellwig.

--
¹ https://lore.kernel.org/linuxppc-dev/20190521044912.1375-12-bauerman@linux.ibm.com/
² https://lore.kernel.org/kvm/20190612111236.99538-2-pasic@linux.ibm.com/

Thiago Jung Bauermann (3):
  x86,s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig
  DMA mapping: Move SME handling to x86-specific files
  fs/core/vmcore: Move sev_active() reference to x86 arch code

 arch/Kconfig                        |  3 +++
 arch/s390/Kconfig                   |  3 ---
 arch/s390/include/asm/mem_encrypt.h |  4 +---
 arch/x86/Kconfig                    |  4 +---
 arch/x86/include/asm/mem_encrypt.h  | 10 ++++++++++
 arch/x86/kernel/crash_dump_64.c     |  5 +++++
 fs/proc/vmcore.c                    |  8 ++++----
 include/linux/crash_dump.h          | 14 ++++++++++++++
 include/linux/mem_encrypt.h         | 15 +--------------
 kernel/dma/mapping.c                |  8 --------
 kernel/dma/swiotlb.c                |  3 +--
 11 files changed, 40 insertions(+), 37 deletions(-)
powerpc is also going to use this feature, so put it in a generic
location.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/Kconfig      | 3 +++
 arch/s390/Kconfig | 3 ---
 arch/x86/Kconfig  | 4 +---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index c47b328eada0..4ef3499d4480 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -927,6 +927,9 @@ config LOCK_EVENT_COUNTS
 	  the chance of application behavior change because of timing
 	  differences. The counts are reported via debugfs.
 
+config ARCH_HAS_MEM_ENCRYPT
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 5d8570ed6cab..f820e631bf89 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -1,7 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
-config ARCH_HAS_MEM_ENCRYPT
-	def_bool y
-
 config MMU
 	def_bool y
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c9f331bb538b..5d3295f2df94 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -68,6 +68,7 @@ config X86
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV			if X86_64
+	select ARCH_HAS_MEM_ENCRYPT
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
@@ -1520,9 +1521,6 @@ config X86_CPA_STATISTICS
 	  helps to determine the effectiveness of preserving large and huge
 	  page mappings when mapping protections are changed.
 
-config ARCH_HAS_MEM_ENCRYPT
-	def_bool y
-
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD
Secure Memory Encryption is an x86-specific feature, so it shouldn't appear
in generic kernel code.

In DMA mapping code, Christoph Hellwig mentioned that "There is no reason
why we should have a special debug printk just for one specific reason why
there is a requirement for a large DMA mask.", so we just remove
dma_check_mask().

In SWIOTLB code, there's no need to mention which memory encryption feature
is active, so just use a more generic warning. Besides, other architectures
will have different names for similar technology.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/s390/include/asm/mem_encrypt.h |  4 +---
 arch/x86/include/asm/mem_encrypt.h  | 10 ++++++++++
 include/linux/mem_encrypt.h         | 14 +-------------
 kernel/dma/mapping.c                |  8 --------
 kernel/dma/swiotlb.c                |  3 +--
 5 files changed, 13 insertions(+), 26 deletions(-)

diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index 3eb018508190..ff813a56bc30 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,9 +4,7 @@
 
 #ifndef __ASSEMBLY__
 
-#define sme_me_mask	0ULL
-
-static inline bool sme_active(void) { return false; }
+static inline bool mem_encrypt_active(void) { return false; }
 extern bool sev_active(void);
 
 int set_memory_encrypted(unsigned long addr, int numpages);
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 0c196c47d621..848ce43b9040 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -92,6 +92,16 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
 extern char __start_bss_decrypted[], __end_bss_decrypted[],
 	__start_bss_decrypted_unused[];
 
+static inline bool mem_encrypt_active(void)
+{
+	return sme_me_mask;
+}
+
+static inline u64 sme_get_me_mask(void)
+{
+	return sme_me_mask;
+}
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __X86_MEM_ENCRYPT_H__ */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 470bd53a89df..0c5b0ff9eb29 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -18,23 +18,11 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0ULL
-
-static inline bool sme_active(void) { return false; }
+static inline bool mem_encrypt_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-static inline bool mem_encrypt_active(void)
-{
-	return sme_me_mask;
-}
-
-static inline u64 sme_get_me_mask(void)
-{
-	return sme_me_mask;
-}
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index f7afdadb6770..b53fc7ec4914 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -291,12 +291,6 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 }
 EXPORT_SYMBOL(dma_free_attrs);
 
-static inline void dma_check_mask(struct device *dev, u64 mask)
-{
-	if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
-		dev_warn(dev, "SME is active, device will require DMA bounce buffers\n");
-}
-
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -321,7 +315,6 @@ int dma_set_mask(struct device *dev, u64 mask)
 		return -EIO;
 
 	arch_dma_set_mask(dev, mask);
-	dma_check_mask(dev, mask);
 	*dev->dma_mask = mask;
 	return 0;
 }
@@ -333,7 +326,6 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
 	if (!dma_supported(dev, mask))
 		return -EIO;
 
-	dma_check_mask(dev, mask);
 	dev->coherent_dma_mask = mask;
 	return 0;
 }
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 62fa5a82a065..e52401f94e91 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -459,8 +459,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
-		pr_warn_once("%s is active and system is using DMA bounce buffers\n",
-			     sme_active() ? "SME" : "SEV");
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
 
 	mask = dma_get_seg_boundary(hwdev);
Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
appear in generic kernel code, because it forces non-x86 architectures to
define the sev_active() function, which doesn't make a lot of sense.

To solve this problem, add an x86 elfcorehdr_read() function to override
the generic weak implementation. To do that, it's necessary to make
read_from_oldmem() public so that it can be used outside of vmcore.c.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/x86/kernel/crash_dump_64.c |  5 +++++
 fs/proc/vmcore.c                |  8 ++++----
 include/linux/crash_dump.h      | 14 ++++++++++++++
 include/linux/mem_encrypt.h     |  1 -
 4 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 22369dd5de3b..045e82e8945b 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
 {
 	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
 }
+
+ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
+{
+	return read_from_oldmem(buf, count, ppos, 0, sev_active());
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 57957c91c6df..ca1f20bedd8c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
 }
 
 /* Reads a page from the oldmem device from given offset. */
-static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf,
-				bool encrypted)
+ssize_t read_from_oldmem(char *buf, size_t count,
+			 u64 *ppos, int userbuf,
+			 bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0, sev_active());
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index f774c5eb9e3c..4664fc1871de 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
 	return -EOPNOTSUPP;
 }
 #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+
+#ifdef CONFIG_PROC_VMCORE
+ssize_t read_from_oldmem(char *buf, size_t count,
+			 u64 *ppos, int userbuf,
+			 bool encrypted);
+#else
+static inline ssize_t read_from_oldmem(char *buf, size_t count,
+				       u64 *ppos, int userbuf,
+				       bool encrypted)
+{
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_PROC_VMCORE */
+
 #endif /* LINUX_CRASHDUMP_H */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 0c5b0ff9eb29..5c4a18a91f89 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -19,7 +19,6 @@
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
 static inline bool mem_encrypt_active(void) { return false; }
-static inline bool sev_active(void) { return false; }
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
I forgot to mark this series as v2 when generating the patches. Sorry for
the confusion.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center
On Sat, Jul 13, 2019 at 01:45:52AM -0300, Thiago Jung Bauermann wrote:
> powerpc is also going to use this feature, so put it in a generic location.
Looks good,
even without a third arch using it we should never have symbols defined
under arch/$(ARCH) that are used in common code to start with.
Reviewed-by: Christoph Hellwig <hch@lst.de>
While this looks generally good to me, I think we want to split this into
three patches:

 1) update the swiotlb printk
 2) remove the dma-mapping check and printk
 3) clean up the mem_encrypt.h interface
On 2019-07-12 23:45, Thiago Jung Bauermann wrote:
> powerpc is also going to use this feature, so put it in a generic
> location.
>
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[...]
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 5d8570ed6cab..f820e631bf89 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -1,7 +1,4 @@
>  # SPDX-License-Identifier: GPL-2.0
> -config ARCH_HAS_MEM_ENCRYPT
> -	def_bool y
> -

Since you are removing the "def_bool y" when ARCH_HAS_MEM_ENCRYPT is moved
to arch/Kconfig, does the s390/Kconfig need "select ARCH_HAS_MEM_ENCRYPT"
added like you do for x86/Kconfig?

- Janani
Hello Janani,

Thanks for reviewing the patch.

janani <janani@linux.ibm.com> writes:
> On 2019-07-12 23:45, Thiago Jung Bauermann wrote:
[...]
>> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
>> index 5d8570ed6cab..f820e631bf89 100644
>> --- a/arch/s390/Kconfig
>> +++ b/arch/s390/Kconfig
>> @@ -1,7 +1,4 @@
>>  # SPDX-License-Identifier: GPL-2.0
>> -config ARCH_HAS_MEM_ENCRYPT
>> -	def_bool y
>> -
>
> Since you are removing the "def_bool y" when ARCH_HAS_MEM_ENCRYPT is moved
> to arch/Kconfig, does the s390/Kconfig need "select ARCH_HAS_MEM_ENCRYPT"
> added like you do for x86/Kconfig?

Indeed, I missed that. Thanks for spotting it!

-- 
Thiago Jung Bauermann
IBM Linux Technology Center