IOMMU Archive on lore.kernel.org
* [PATCH 0/3] Remove x86-specific code from generic headers
@ 2019-07-12  5:36 Thiago Jung Bauermann
  2019-07-12  5:36 ` [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig Thiago Jung Bauermann
                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12  5:36 UTC (permalink / raw)
  To: x86
  Cc: linux-s390, Konrad Rzeszutek Wilk, Robin Murphy, Mike Anderson,
	Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

Hello,

Both powerpc¹ and s390² are adding <asm/mem_encrypt.h> headers. Currently,
they have to supply definitions for functions and macros which only have a
meaning on x86: sme_me_mask, sme_active() and sev_active().

Christoph Hellwig suggested that we "clean up the Kconfig and generic
headers bits for memory encryption so that we don't need all this
boilerplate code", and that is what this series does.

After this patch set, this is powerpc's <asm/mem_encrypt.h>:

    #ifndef _ASM_POWERPC_MEM_ENCRYPT_H
    #define _ASM_POWERPC_MEM_ENCRYPT_H

    #include <asm/svm.h>

    static inline bool mem_encrypt_active(void)
    {
	    return is_secure_guest();
    }

    static inline bool force_dma_unencrypted(struct device *dev)
    {
	    return is_secure_guest();
    }

    int set_memory_encrypted(unsigned long addr, int numpages);
    int set_memory_decrypted(unsigned long addr, int numpages);

    #endif /* _ASM_POWERPC_MEM_ENCRYPT_H */

I don't have a way to test either SME or SEV, so the patches have only been
build tested. They assume the presence of the following two commits:

Commit 4eb5fec31e61 ("fs/proc/vmcore: Enable dumping of encrypted memory
when SEV was active"), which is now in Linus' master branch;

Commit e67a5ed1f86f ("dma-direct: Force unencrypted DMA under SME for
certain DMA masks"), which is in dma-mapping/for-next and comes from this
patch:

https://lore.kernel.org/linux-iommu/10b83d9ff31bca88e94da2ff34e30619eb396078.1562785123.git.thomas.lendacky@amd.com/

Thiago Jung Bauermann (3):
  x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig
  DMA mapping: Move SME handling to x86-specific files
  fs/core/vmcore: Move sev_active() reference to x86 arch code

 arch/Kconfig                       |  3 +++
 arch/x86/Kconfig                   |  5 ++---
 arch/x86/include/asm/dma-mapping.h |  7 +++++++
 arch/x86/include/asm/mem_encrypt.h | 10 ++++++++++
 arch/x86/kernel/crash_dump_64.c    |  5 +++++
 fs/proc/vmcore.c                   |  8 ++++----
 include/linux/crash_dump.h         | 14 ++++++++++++++
 include/linux/mem_encrypt.h        | 15 +--------------
 kernel/dma/Kconfig                 |  3 +++
 kernel/dma/mapping.c               |  4 ++--
 kernel/dma/swiotlb.c               |  3 +--
 11 files changed, 52 insertions(+), 25 deletions(-)

-- 

¹ https://lore.kernel.org/linuxppc-dev/20190521044912.1375-12-bauerman@linux.ibm.com/
² https://lore.kernel.org/kvm/20190612111236.99538-2-pasic@linux.ibm.com/

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


* [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig
  2019-07-12  5:36 [PATCH 0/3] Remove x86-specific code from generic headers Thiago Jung Bauermann
@ 2019-07-12  5:36 ` Thiago Jung Bauermann
  2019-07-12 16:04   ` Thomas Gleixner
  2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
  2019-07-12  5:36 ` [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code Thiago Jung Bauermann
  2 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12  5:36 UTC (permalink / raw)
  To: x86
  Cc: linux-s390, Konrad Rzeszutek Wilk, Robin Murphy, Mike Anderson,
	Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

powerpc and s390 are going to use this feature as well, so put it in a
generic location.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/Kconfig     | 3 +++
 arch/x86/Kconfig | 4 +---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index c47b328eada0..4ef3499d4480 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -927,6 +927,9 @@ config LOCK_EVENT_COUNTS
 	  the chance of application behavior change because of timing
 	  differences. The counts are reported via debugfs.
 
+config ARCH_HAS_MEM_ENCRYPT
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 12e02a8f9de7..7f4d28da8fe3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -67,6 +67,7 @@ config X86
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV			if X86_64
+	select ARCH_HAS_MEM_ENCRYPT
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
@@ -1500,9 +1501,6 @@ config X86_CPA_STATISTICS
 	  helps to determine the effectiveness of preserving large and huge
 	  page mappings when mapping protections are changed.
 
-config ARCH_HAS_MEM_ENCRYPT
-	def_bool y
-
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD

* [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12  5:36 [PATCH 0/3] Remove x86-specific code from generic headers Thiago Jung Bauermann
  2019-07-12  5:36 ` [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig Thiago Jung Bauermann
@ 2019-07-12  5:36 ` Thiago Jung Bauermann
  2019-07-12  7:13   ` Christoph Hellwig
                     ` (2 more replies)
  2019-07-12  5:36 ` [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code Thiago Jung Bauermann
  2 siblings, 3 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12  5:36 UTC (permalink / raw)
  To: x86
  Cc: linux-s390, Konrad Rzeszutek Wilk, Robin Murphy, Mike Anderson,
	Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

Secure Memory Encryption is an x86-specific feature, so it shouldn't appear
in generic kernel code.

Introduce ARCH_HAS_DMA_CHECK_MASK so that x86 can define its own
dma_check_mask() for the SME check.

In SWIOTLB code, there's no need to mention which memory encryption
feature is active. Also, other architectures will have different names,
so this gets unwieldy quickly.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/x86/Kconfig                   |  1 +
 arch/x86/include/asm/dma-mapping.h |  7 +++++++
 arch/x86/include/asm/mem_encrypt.h | 10 ++++++++++
 include/linux/mem_encrypt.h        | 14 +-------------
 kernel/dma/Kconfig                 |  3 +++
 kernel/dma/mapping.c               |  4 ++--
 kernel/dma/swiotlb.c               |  3 +--
 7 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7f4d28da8fe3..dbabe42e7f1c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -61,6 +61,7 @@ config X86
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
+	select ARCH_HAS_DMA_CHECK_MASK
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_FILTER_PGPROT
diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 6b15a24930e0..55e710ba95a5 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -12,6 +12,7 @@
 #include <asm/io.h>
 #include <asm/swiotlb.h>
 #include <linux/dma-contiguous.h>
+#include <linux/mem_encrypt.h>
 
 extern int iommu_merge;
 extern int panic_on_overflow;
@@ -23,4 +24,10 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 	return dma_ops;
 }
 
+static inline void dma_check_mask(struct device *dev, u64 mask)
+{
+	if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
+		dev_warn(dev, "SME is active, device will require DMA bounce buffers\n");
+}
+
 #endif
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 616f8e637bc3..e4c9e1a57d25 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -95,6 +95,16 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
 
 extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
 
+static inline bool mem_encrypt_active(void)
+{
+	return sme_me_mask;
+}
+
+static inline u64 sme_get_me_mask(void)
+{
+	return sme_me_mask;
+}
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __X86_MEM_ENCRYPT_H__ */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index b310a9c18113..f2e399fb626b 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,23 +21,11 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0ULL
-
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
+static inline bool mem_encrypt_active(void) { return false; }
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-static inline bool mem_encrypt_active(void)
-{
-	return sme_me_mask;
-}
-
-static inline u64 sme_get_me_mask(void)
-{
-	return sme_me_mask;
-}
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 9decbba255fc..34b44bfba372 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -17,6 +17,9 @@ config ARCH_DMA_ADDR_T_64BIT
 config ARCH_HAS_DMA_COHERENCE_H
 	bool
 
+config ARCH_HAS_DMA_CHECK_MASK
+	bool
+
 config ARCH_HAS_DMA_SET_MASK
 	bool
 
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index f7afdadb6770..ed46f88378d4 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -291,11 +291,11 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 }
 EXPORT_SYMBOL(dma_free_attrs);
 
+#ifndef CONFIG_ARCH_HAS_DMA_CHECK_MASK
 static inline void dma_check_mask(struct device *dev, u64 mask)
 {
-	if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
-		dev_warn(dev, "SME is active, device will require DMA bounce buffers\n");
 }
+#endif
 
 int dma_supported(struct device *dev, u64 mask)
 {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 13f0cb080a4d..67482ad6aab2 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -459,8 +459,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
-		pr_warn_once("%s is active and system is using DMA bounce buffers\n",
-			     sme_active() ? "SME" : "SEV");
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
 
 	mask = dma_get_seg_boundary(hwdev);
 

* [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12  5:36 [PATCH 0/3] Remove x86-specific code from generic headers Thiago Jung Bauermann
  2019-07-12  5:36 ` [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig Thiago Jung Bauermann
  2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
@ 2019-07-12  5:36 ` Thiago Jung Bauermann
  2019-07-12 13:09   ` Halil Pasic
  2 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12  5:36 UTC (permalink / raw)
  To: x86
  Cc: linux-s390, Konrad Rzeszutek Wilk, Robin Murphy, Mike Anderson,
	Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
appear in generic kernel code because it forces non-x86 architectures to
define the sev_active() function, which doesn't make a lot of sense.

To solve this problem, add an x86 elfcorehdr_read() function to override
the generic weak implementation. To do that, it's necessary to make
read_from_oldmem() public so that it can be used outside of vmcore.c.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/x86/kernel/crash_dump_64.c |  5 +++++
 fs/proc/vmcore.c                |  8 ++++----
 include/linux/crash_dump.h      | 14 ++++++++++++++
 include/linux/mem_encrypt.h     |  1 -
 4 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 22369dd5de3b..045e82e8945b 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
 {
 	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
 }
+
+ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
+{
+	return read_from_oldmem(buf, count, ppos, 0, sev_active());
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 57957c91c6df..ca1f20bedd8c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
 }
 
 /* Reads a page from the oldmem device from given offset. */
-static ssize_t read_from_oldmem(char *buf, size_t count,
-				u64 *ppos, int userbuf,
-				bool encrypted)
+ssize_t read_from_oldmem(char *buf, size_t count,
+			 u64 *ppos, int userbuf,
+			 bool encrypted)
 {
 	unsigned long pfn, offset;
 	size_t nr_bytes;
@@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
  */
 ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0, sev_active());
+	return read_from_oldmem(buf, count, ppos, 0, false);
 }
 
 /*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index f774c5eb9e3c..4664fc1871de 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
 	return -EOPNOTSUPP;
 }
 #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+
+#ifdef CONFIG_PROC_VMCORE
+ssize_t read_from_oldmem(char *buf, size_t count,
+			 u64 *ppos, int userbuf,
+			 bool encrypted);
+#else
+static inline ssize_t read_from_oldmem(char *buf, size_t count,
+				       u64 *ppos, int userbuf,
+				       bool encrypted)
+{
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_PROC_VMCORE */
+
 #endif /* LINUX_CRASHDUMP_H */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index f2e399fb626b..a3747fcae466 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,6 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-static inline bool sev_active(void) { return false; }
 static inline bool mem_encrypt_active(void) { return false; }
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */

* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
@ 2019-07-12  7:13   ` Christoph Hellwig
  2019-07-12 23:42     ` Thiago Jung Bauermann
  2019-07-12 16:09   ` Thomas Gleixner
  2019-07-19  9:05   ` kbuild test robot
  2 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-12  7:13 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

Honestly I think this code should go away without any replacement.
There is no reason why we should have a special debug printk just
for one specific reason why a device would require a large DMA
mask.

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12  5:36 ` [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code Thiago Jung Bauermann
@ 2019-07-12 13:09   ` Halil Pasic
  2019-07-12 14:08     ` Christoph Hellwig
  2019-07-12 21:55     ` Thiago Jung Bauermann
  0 siblings, 2 replies; 23+ messages in thread
From: Halil Pasic @ 2019-07-12 13:09 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-fsdevel, Thomas Gleixner,
	linuxppc-dev, Christoph Hellwig

On Fri, 12 Jul 2019 02:36:31 -0300
Thiago Jung Bauermann <bauerman@linux.ibm.com> wrote:

> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> appear in generic kernel code because it forces non-x86 architectures to
> define the sev_active() function, which doesn't make a lot of sense.

sev_active() might just be a bad (too specific) name for a general
concept. s390 code defines it, and it drives the right behavior in
kernel/dma/direct.c (which uses it).

> 
> To solve this problem, add an x86 elfcorehdr_read() function to override
> the generic weak implementation. To do that, it's necessary to make
> read_from_oldmem() public so that it can be used outside of vmcore.c.
> 
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
>  arch/x86/kernel/crash_dump_64.c |  5 +++++
>  fs/proc/vmcore.c                |  8 ++++----
>  include/linux/crash_dump.h      | 14 ++++++++++++++
>  include/linux/mem_encrypt.h     |  1 -
>  4 files changed, 23 insertions(+), 5 deletions(-)

Does not seem to apply to today's or yesterday's master.

> 
> diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
> index 22369dd5de3b..045e82e8945b 100644
> --- a/arch/x86/kernel/crash_dump_64.c
> +++ b/arch/x86/kernel/crash_dump_64.c
> @@ -70,3 +70,8 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>  {
>  	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
>  }
> +
> +ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> +{
> +	return read_from_oldmem(buf, count, ppos, 0, sev_active());
> +}
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 57957c91c6df..ca1f20bedd8c 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -100,9 +100,9 @@ static int pfn_is_ram(unsigned long pfn)
>  }
>  
>  /* Reads a page from the oldmem device from given offset. */
> -static ssize_t read_from_oldmem(char *buf, size_t count,
> -				u64 *ppos, int userbuf,
> -				bool encrypted)
> +ssize_t read_from_oldmem(char *buf, size_t count,
> +			 u64 *ppos, int userbuf,
> +			 bool encrypted)
>  {
>  	unsigned long pfn, offset;
>  	size_t nr_bytes;
> @@ -166,7 +166,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
>   */
>  ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
>  {
> -	return read_from_oldmem(buf, count, ppos, 0, sev_active());
> +	return read_from_oldmem(buf, count, ppos, 0, false);
>  }
>  
>  /*
> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
> index f774c5eb9e3c..4664fc1871de 100644
> --- a/include/linux/crash_dump.h
> +++ b/include/linux/crash_dump.h
> @@ -115,4 +115,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
>  	return -EOPNOTSUPP;
>  }
>  #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> +
> +#ifdef CONFIG_PROC_VMCORE
> +ssize_t read_from_oldmem(char *buf, size_t count,
> +			 u64 *ppos, int userbuf,
> +			 bool encrypted);
> +#else
> +static inline ssize_t read_from_oldmem(char *buf, size_t count,
> +				       u64 *ppos, int userbuf,
> +				       bool encrypted)
> +{
> +	return -EOPNOTSUPP;
> +}
> +#endif /* CONFIG_PROC_VMCORE */
> +
>  #endif /* LINUX_CRASHDUMP_H */
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index f2e399fb626b..a3747fcae466 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -21,7 +21,6 @@
>  
>  #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>  
> -static inline bool sev_active(void) { return false; }

This is the implementation for the guys that don't
have ARCH_HAS_MEM_ENCRYPT.

That means sev_active() may not be used in such code after this
patch. What about

static inline bool force_dma_unencrypted(void)
{
        return sev_active();
}

in kernel/dma/direct.c?

Regards,
Halil

>  static inline bool mem_encrypt_active(void) { return false; }
>  
>  #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */


* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 13:09   ` Halil Pasic
@ 2019-07-12 14:08     ` Christoph Hellwig
  2019-07-12 14:51       ` Halil Pasic
  2019-07-12 21:55     ` Thiago Jung Bauermann
  1 sibling, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-12 14:08 UTC (permalink / raw)
  To: Halil Pasic
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-fsdevel, Thomas Gleixner,
	linuxppc-dev, Christoph Hellwig

On Fri, Jul 12, 2019 at 03:09:12PM +0200, Halil Pasic wrote:
> This is the implementation for the guys that don't
> have ARCH_HAS_MEM_ENCRYPT.
> 
> That means sev_active() may not be used in such code after this
> patch. What about 
> 
> static inline bool force_dma_unencrypted(void)
> {
>         return sev_active();
> }
> 
> in kernel/dma/direct.c?

FYI, I have this pending in the dma-mapping tree:

http://git.infradead.org/users/hch/dma-mapping.git/commitdiff/e67a5ed1f86f4370991c601f2fcad9ebf9e1eebb

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 14:08     ` Christoph Hellwig
@ 2019-07-12 14:51       ` Halil Pasic
  2019-07-12 15:11         ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Halil Pasic @ 2019-07-12 14:51 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, iommu, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, linux-fsdevel, Thomas Gleixner, linuxppc-dev,
	Alexey Dobriyan

On Fri, 12 Jul 2019 16:08:12 +0200
Christoph Hellwig <hch@lst.de> wrote:

> On Fri, Jul 12, 2019 at 03:09:12PM +0200, Halil Pasic wrote:
> > This is the implementation for the guys that don't
> > have ARCH_HAS_MEM_ENCRYPT.
> > 
> > That means sev_active() may not be used in such code after this
> > patch. What about 
> > 
> > static inline bool force_dma_unencrypted(void)
> > {
> >         return sev_active();
> > }
> > 
> > in kernel/dma/direct.c?
> 
> FYI, I have this pending in the dma-mapping tree:
> 
> http://git.infradead.org/users/hch/dma-mapping.git/commitdiff/e67a5ed1f86f4370991c601f2fcad9ebf9e1eebb

Thank you very much! I will have another look, but it seems to me,
without further measures taken, this would break protected virtualization
support on s390. The effect of the change for s390 is that
force_dma_unencrypted() will always return false instead of calling into
the platform code like it did before the patch, right?

Should I send a Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
rectifies things for s390, or how do we want to handle this?

Regards,
Halil


* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 14:51       ` Halil Pasic
@ 2019-07-12 15:11         ` Christoph Hellwig
  2019-07-12 15:42           ` Halil Pasic
  0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-12 15:11 UTC (permalink / raw)
  To: Halil Pasic
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-fsdevel, Thomas Gleixner,
	linuxppc-dev, Christoph Hellwig

On Fri, Jul 12, 2019 at 04:51:53PM +0200, Halil Pasic wrote:
> Thank you very much! I will have another look, but it seems to me,
> without further measures taken, this would break protected virtualization
> support on s390. The effect of the change for s390 is that
> force_dma_unencrypted() will always return false instead of calling into
> the platform code like it did before the patch, right?
> 
> Should I send a  Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
> under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
> rectifies things for s390, or how do we want to handle this?

Yes, please do.  I hadn't noticed the s390 support had landed in
mainline already.

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 15:11         ` Christoph Hellwig
@ 2019-07-12 15:42           ` Halil Pasic
  2019-07-13  8:08             ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Halil Pasic @ 2019-07-12 15:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, iommu, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, linux-fsdevel, Thomas Gleixner, linuxppc-dev,
	Alexey Dobriyan

On Fri, 12 Jul 2019 17:11:29 +0200
Christoph Hellwig <hch@lst.de> wrote:

> On Fri, Jul 12, 2019 at 04:51:53PM +0200, Halil Pasic wrote:
> > Thank you very much! I will have another look, but it seems to me,
> > without further measures taken, this would break protected virtualization
> > support on s390. The effect of the change for s390 is that
> > force_dma_unencrypted() will always return false instead of calling into
> > the platform code like it did before the patch, right?
> > 
> > Should I send a  Fixes: e67a5ed1f86f "dma-direct: Force unencrypted DMA
> > under SME for certain DMA masks" (Tom Lendacky, 2019-07-10) patch that
> > rectifies things for s390, or how do we want to handle this?
> 
> Yes, please do.  I hadn't noticed the s390 support had landed in
> mainline already.
> 

Will do! I guess I should do the patch against the for-next branch of the
dma-mapping tree. But that branch does not have the s390 support patches (yet?).
To fix it I need both e67a5ed1f86f and 64e1f0c531d1 "s390/mm: force
swiotlb for protected virtualization" (Halil Pasic, 2018-09-13). Or
should I wait for e67a5ed1f86f to land in mainline?

Regards,
Halil


* Re: [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig
  2019-07-12  5:36 ` [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig Thiago Jung Bauermann
@ 2019-07-12 16:04   ` Thomas Gleixner
  2019-07-12 23:35     ` Thiago Jung Bauermann
  0 siblings, 1 reply; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-12 16:04 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	linuxppc-dev, Christoph Hellwig

On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:

> powerpc and s390 are going to use this feature as well, so put it in a
> generic location.
> 
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
  2019-07-12  7:13   ` Christoph Hellwig
@ 2019-07-12 16:09   ` Thomas Gleixner
  2019-07-18 19:47     ` Thiago Jung Bauermann
  2019-07-19  9:05   ` kbuild test robot
  2 siblings, 1 reply; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-12 16:09 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	linuxppc-dev, Christoph Hellwig

On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index b310a9c18113..f2e399fb626b 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -21,23 +21,11 @@
>  
>  #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>  
> -#define sme_me_mask	0ULL
> -
> -static inline bool sme_active(void) { return false; }
>  static inline bool sev_active(void) { return false; }

You want to move out sev_active() as well; the only relevant thing is
mem_encrypt_active(). Everything SME/SEV is an architecture detail.

> +static inline bool mem_encrypt_active(void) { return false; }

Thanks,

	tglx

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 13:09   ` Halil Pasic
  2019-07-12 14:08     ` Christoph Hellwig
@ 2019-07-12 21:55     ` Thiago Jung Bauermann
  2019-07-15 14:03       ` Halil Pasic
  1 sibling, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12 21:55 UTC (permalink / raw)
  To: Halil Pasic
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, Lendacky, Thomas, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig


[ Cc'ing Tom Lendacky which I forgot to do earlier. Sorry about that. ]

Hello Halil,

Thanks for the quick review.

Halil Pasic <pasic@linux.ibm.com> writes:

> On Fri, 12 Jul 2019 02:36:31 -0300
> Thiago Jung Bauermann <bauerman@linux.ibm.com> wrote:
>
>> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
>> appear in generic kernel code because it forces non-x86 architectures to
>> define the sev_active() function, which doesn't make a lot of sense.
>
> sev_active() might just be a bad (too specific) name for a general
> concept. s390 code defines it, and it drives the right behavior in
> kernel/dma/direct.c (which uses it).

I thought about that but couldn't put my finger on a general concept.
Is it "guest with memory inaccessible to the host"?

Since your proposed definition for force_dma_unencrypted() is simply to
make it equivalent to sev_active(), I thought it was more
straightforward to make each arch define force_dma_unencrypted()
directly.

Also, does sev_active() drive the right behavior for s390 in
elfcorehdr_read() as well?
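For readers unfamiliar with the override mechanism being discussed, the weak-symbol pattern can be sketched like this (signature simplified for illustration; this is not the patch itself):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/*
 * Sketch of the __weak override pattern the patch relies on: generic
 * code provides a weak default, and an architecture can supply a
 * strong definition in another object file, which the linker prefers.
 * The real function is ssize_t elfcorehdr_read(char *buf, size_t count,
 * u64 *ppos); the signature here is simplified.
 */
ssize_t __attribute__((weak)) elfcorehdr_read(char *buf, size_t count)
{
	/* Generic default: treat the ELF core header as plain memory. */
	(void)buf;
	return (ssize_t)count;
}

/*
 * In the kernel, arch/x86/kernel/crash_dump_64.c would define a strong
 * elfcorehdr_read() in its own translation unit to handle SEV-encrypted
 * memory; with only the weak definition linked in, the default runs.
 */
```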

>> To solve this problem, add an x86 elfcorehdr_read() function to override
>> the generic weak implementation. To do that, it's necessary to make
>> read_from_oldmem() public so that it can be used outside of vmcore.c.
>>
>> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
>> ---
>>  arch/x86/kernel/crash_dump_64.c |  5 +++++
>>  fs/proc/vmcore.c                |  8 ++++----
>>  include/linux/crash_dump.h      | 14 ++++++++++++++
>>  include/linux/mem_encrypt.h     |  1 -
>>  4 files changed, 23 insertions(+), 5 deletions(-)
>
> Does not seem to apply to today's or yesterdays master.

It assumes the presence of the two patches I mentioned in the cover
letter. Only one of them is in master.

I hadn't realized the s390 virtio patches were on their way to upstream.
I was keeping an eye on the email thread but didn't see they were picked
up in the s390 pull request. I'll add a new patch to this series making
the corresponding changes to s390's <asm/mem_encrypt.h> as well.

--
Thiago Jung Bauermann
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig
  2019-07-12 16:04   ` Thomas Gleixner
@ 2019-07-12 23:35     ` Thiago Jung Bauermann
  0 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12 23:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	linuxppc-dev, Christoph Hellwig


Hello Thomas,

Thanks for quickly reviewing the patches.

Thomas Gleixner <tglx@linutronix.de> writes:

> On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:
>
>> powerpc and s390 are going to use this feature as well, so put it in a
>> generic location.
>> 
>> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

Thanks!

-- 
Thiago Jung Bauermann
IBM Linux Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12  7:13   ` Christoph Hellwig
@ 2019-07-12 23:42     ` Thiago Jung Bauermann
  0 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-12 23:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Halil Pasic, iommu, Ingo Molnar,
	Borislav Petkov, Lendacky, Thomas, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Alexey Dobriyan


[ Cc'ing Tom Lendacky, which I forgot to do earlier. Sorry about that. ]

Hello Christoph,

Christoph Hellwig <hch@lst.de> writes:

> Honestly I think this code should go away without any replacement.
> There is no reason why we should have a special debug printk just
> for one specific reason why there is a requirement for a large DMA
> mask.

Makes sense. I'll submit a v2 which just removes this code.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 15:42           ` Halil Pasic
@ 2019-07-13  8:08             ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-13  8:08 UTC (permalink / raw)
  To: Halil Pasic
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-fsdevel, Thomas Gleixner,
	linuxppc-dev, Christoph Hellwig

On Fri, Jul 12, 2019 at 05:42:49PM +0200, Halil Pasic wrote:
> 
> Will do! I guess I should do the patch against the for-next branch of the
> dma-mapping tree. But that branch does not have the s390 support patches (yet?).
> To fix it I need both e67a5ed1f86f and 64e1f0c531d1 "s390/mm: force
> swiotlb for protected virtualization" (Halil Pasic, 2018-09-13). Or
> should I wait for e67a5ed1f86f to land in mainline?

I've rebased the dma-mapping for-next branch to latest mainline as of
today that has both commits.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-12 21:55     ` Thiago Jung Bauermann
@ 2019-07-15 14:03       ` Halil Pasic
  2019-07-15 14:30         ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Halil Pasic @ 2019-07-15 14:03 UTC (permalink / raw)
  To: Thiago Jung Bauermann, Janosch Frank
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu, Ingo Molnar,
	Borislav Petkov, Lendacky, Thomas, H. Peter Anvin, linux-fsdevel,
	Thomas Gleixner, linuxppc-dev, Christoph Hellwig

On Fri, 12 Jul 2019 18:55:47 -0300
Thiago Jung Bauermann <bauerman@linux.ibm.com> wrote:

> 
> [ Cc'ing Tom Lendacky, which I forgot to do earlier. Sorry about that. ]
> 
> Hello Halil,
> 
> Thanks for the quick review.
> 
> Halil Pasic <pasic@linux.ibm.com> writes:
> 
> > On Fri, 12 Jul 2019 02:36:31 -0300
> > Thiago Jung Bauermann <bauerman@linux.ibm.com> wrote:
> >
> >> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
> >> appear in generic kernel code because it forces non-x86 architectures to
> >> define the sev_active() function, which doesn't make a lot of sense.
> >
> > sev_active() might just be a bad (too specific) name for a general
> > concept. s390 code defines it, and it drives the right behavior in
> > kernel/dma/direct.c (which uses it).
> 
> I thought about that but couldn't put my finger on a general concept.
> Is it "guest with memory inaccessible to the host"?
> 

Well, force_dma_unencrypted() is a much better name than sev_active():
s390 has no AMD SEV, that is sure, but for virtio to work we do need to
make our DMA accessible to the hypervisor. Yes, your "guest with memory
inaccessible to the host" points in the right direction IMHO.
Unfortunately I don't have too many cycles to spend on this right now.

> Since your proposed definition for force_dma_unencrypted() is simply to
> make it equivalent to sev_active(), I thought it was more
> straightforward to make each arch define force_dma_unencrypted()
> directly.

I did not mean to propose equivalence. I intended to say that the name
sev_active() is not suitable for a common concept. On the other hand,
we do have a common concept: common code needs to do or not do things
depending on whether "memory is protected/encrypted" or not. I'm fine
with the name force_dma_unencrypted(), especially because I don't
have a better name.
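The direction settled on here, with each arch defining force_dma_unencrypted() for its own flavor of protected memory, can be sketched as follows (the predicates are illustrative stand-ins for the real helpers, and the real hook also takes a struct device *):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the real per-arch predicates. */
static bool sev_guest = true;       /* x86: sev_active()          */
static bool prot_virt_guest = true; /* s390: is_prot_virt_guest() */

/*
 * Sketch of the per-arch approach: each architecture defines
 * force_dma_unencrypted() in terms of its own notion of protected
 * memory, so common DMA code never mentions SEV or protected
 * virtualization by name.
 */
static bool x86_force_dma_unencrypted(void)
{
	return sev_guest;
}

static bool s390_force_dma_unencrypted(void)
{
	return prot_virt_guest;
}
```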

> 
> Also, does sev_active() drive the right behavior for s390 in
> elfcorehdr_read() as well?
> 

AFAIU, since s390 does not override it, it boils down to the same thing
whether sev_active() returns true or false. I'm no expert in that area,
but I strongly hope that is the right behavior. @Janosch: can you help
me out with this one?

> >> To solve this problem, add an x86 elfcorehdr_read() function to override
> >> the generic weak implementation. To do that, it's necessary to make
> >> read_from_oldmem() public so that it can be used outside of vmcore.c.
> >>
> >> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> >> ---
> >>  arch/x86/kernel/crash_dump_64.c |  5 +++++
> >>  fs/proc/vmcore.c                |  8 ++++----
> >>  include/linux/crash_dump.h      | 14 ++++++++++++++
> >>  include/linux/mem_encrypt.h     |  1 -
> >>  4 files changed, 23 insertions(+), 5 deletions(-)
> >
> > Does not seem to apply to today's or yesterdays master.
> 
> It assumes the presence of the two patches I mentioned in the cover
> letter. Only one of them is in master.
> 
> I hadn't realized the s390 virtio patches were on their way to upstream.
> I was keeping an eye on the email thread but didn't see they were picked
> up in the s390 pull request. I'll add a new patch to this series making
> the corresponding changes to s390's <asm/mem_encrypt.h> as well.
> 

Being on cc for your patch made me realize that things got broken on
s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
to benefit from your cleanups. I think with your cleanups and that patch
of mine both sev_active() and sme_active() can be removed. Feel free to
do so. If not, I can attend to it as well.

Regards,
Halil


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-15 14:03       ` Halil Pasic
@ 2019-07-15 14:30         ` Christoph Hellwig
  2019-07-15 15:44           ` Lendacky, Thomas
  2019-07-15 20:14           ` Thiago Jung Bauermann
  0 siblings, 2 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-15 14:30 UTC (permalink / raw)
  To: Halil Pasic
  Cc: linux-s390, Mike Anderson, Janosch Frank, Konrad Rzeszutek Wilk,
	Robin Murphy, x86, Ram Pai, linux-kernel, Alexey Dobriyan, iommu,
	Ingo Molnar, Borislav Petkov, Lendacky, Thomas, H. Peter Anvin,
	linux-fsdevel, Thomas Gleixner, linuxppc-dev, Christoph Hellwig

On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
> > I thought about that but couldn't put my finger on a general concept.
> > Is it "guest with memory inaccessible to the host"?
> > 
> 
> Well, force_dma_unencrypted() is a much better name than sev_active():
> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
> make our DMA accessible to the hypervisor. Yes, your "guest with memory
> inaccessible to the host" points in the right direction IMHO.
> Unfortunately I don't have too many cycles to spend on this right now.

In x86 it means that we need to remove dma encryption using
set_memory_decrypted before using it for DMA purposes.  In the SEV
case that seems to be so that the hypervisor can access it; in the SME
case that Tom just fixed, it is because there is an encrypted bit set
in the physical address, and if the device doesn't support a large
enough DMA address the direct mapping code has to encrypt the pages
used for the contiguous allocation.

> Being on cc for your patch made me realize that things got broken on
> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
> to benefit from your cleanups. I think with your cleanups and that patch
> of mine both sev_active() and sme_active() can be removed. Feel free to
> do so. If not, I can attend to it as well.

Yes, I think with the dma-mapping fix and this series sme_active and
sev_active should be gone from common code.  We should also be able
to remove the exports x86 has for them.

I'll wait a few days and will then feed the dma-mapping fix to Linus.
It might make sense to either rebase Thiago's series on top of the
dma-mapping for-next branch, or wait a few days before reposting.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-15 14:30         ` Christoph Hellwig
@ 2019-07-15 15:44           ` Lendacky, Thomas
  2019-07-15 20:14           ` Thiago Jung Bauermann
  1 sibling, 0 replies; 23+ messages in thread
From: Lendacky, Thomas @ 2019-07-15 15:44 UTC (permalink / raw)
  To: Christoph Hellwig, Halil Pasic
  Cc: linux-s390, Mike Anderson, Janosch Frank, Konrad Rzeszutek Wilk,
	Robin Murphy, x86, Ram Pai, linux-kernel, iommu, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-fsdevel, Thomas Gleixner,
	linuxppc-dev, Alexey Dobriyan

On 7/15/19 9:30 AM, Christoph Hellwig wrote:
> On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
>>> I thought about that but couldn't put my finger on a general concept.
>>> Is it "guest with memory inaccessible to the host"?
>>>
>>
>> Well, force_dma_unencrypted() is a much better name than sev_active():
>> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
>> make our DMA accessible to the hypervisor. Yes, your "guest with memory
>> inaccessible to the host" points in the right direction IMHO.
>> Unfortunately I don't have too many cycles to spend on this right now.
> 
> In x86 it means that we need to remove dma encryption using
> set_memory_decrypted before using it for DMA purposes.  In the SEV
> case that seems to be so that the hypervisor can access it; in the SME
> case that Tom just fixed, it is because there is an encrypted bit set
> in the physical address, and if the device doesn't support a large
> enough DMA address the direct mapping code has to encrypt the pages
> used for the contiguous allocation.

Just a correction/clarification...

For SME, when a device doesn't support a large enough DMA address to
accommodate the encryption bit as part of the DMA address, the direct
mapping code has to provide un-encrypted pages. For un-encrypted pages,
the DMA address now does not include the encryption bit, making it
acceptable to the device. Since the device is now using a DMA address
without the encryption bit, the physical address in the CPU page table
must match (the call to set_memory_decrypted) so that both the device and
the CPU interact in the same way with the memory.
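A sketch of the mask check Tom describes, with an assumed bit position (the real sme_me_mask is CPU-specific and discovered at boot):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative value: SME encryption bit at physical address bit 47. */
#define SME_ME_MASK (1ULL << 47)

/*
 * Under SME, an encrypted page's DMA address carries the encryption
 * bit.  A device whose DMA mask cannot represent that bit must instead
 * be handed decrypted pages, whose DMA address omits the bit; the CPU
 * page table must then match via set_memory_decrypted().
 */
static bool needs_decrypted_pages(uint64_t dma_mask)
{
	return (dma_mask & SME_ME_MASK) == 0;
}
```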

Thanks,
Tom

> 
>> Being on cc for your patch made me realize that things got broken on
>> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
>> to benefit from your cleanups. I think with your cleanups and that patch
>> of mine both sev_active() and sme_active() can be removed. Feel free to
>> do so. If not, I can attend to it as well.
> 
> Yes, I think with the dma-mapping fix and this series sme_active and
> sev_active should be gone from common code.  We should also be able
> to remove the exports x86 has for them.
> 
> I'll wait a few days and will then feed the dma-mapping fix to Linus.
> It might make sense to either rebase Thiago's series on top of the
> dma-mapping for-next branch, or wait a few days before reposting.
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code
  2019-07-15 14:30         ` Christoph Hellwig
  2019-07-15 15:44           ` Lendacky, Thomas
@ 2019-07-15 20:14           ` Thiago Jung Bauermann
  1 sibling, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-15 20:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-s390, Mike Anderson, Janosch Frank, Konrad Rzeszutek Wilk,
	Robin Murphy, x86, Ram Pai, linux-kernel, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, Lendacky, Thomas, H. Peter Anvin,
	linux-fsdevel, Thomas Gleixner, linuxppc-dev, Alexey Dobriyan


Christoph Hellwig <hch@lst.de> writes:

> On Mon, Jul 15, 2019 at 04:03:17PM +0200, Halil Pasic wrote:
>> > I thought about that but couldn't put my finger on a general concept.
>> > Is it "guest with memory inaccessible to the host"?
>> >
>>
>> Well, force_dma_unencrypted() is a much better name than sev_active():
>> s390 has no AMD SEV, that is sure, but for virtio to work we do need to
>> make our DMA accessible to the hypervisor. Yes, your "guest with memory
>> inaccessible to the host" points in the right direction IMHO.
>> Unfortunately I don't have too many cycles to spend on this right now.
>
> In x86 it means that we need to remove dma encryption using
> set_memory_decrypted before using it for DMA purposes.  In the SEV
> case that seems to be so that the hypervisor can access it; in the SME
> case that Tom just fixed, it is because there is an encrypted bit set
> in the physical address, and if the device doesn't support a large
> enough DMA address the direct mapping code has to encrypt the pages
> used for the contiguous allocation.
>
>> Being on cc for your patch made me realize that things got broken on
>> s390. Thanks! I've sent out a patch that fixes protvirt, but we are going
>> to benefit from your cleanups. I think with your cleanups and that patch
>> of mine both sev_active() and sme_active() can be removed. Feel free to
>> do so. If not, I can attend to it as well.
>
> Yes, I think with the dma-mapping fix and this series sme_active and
> sev_active should be gone from common code.  We should also be able
> to remove the exports x86 has for them.
>
> I'll wait a few days and will then feed the dma-mapping fix to Linus.
> It might make sense to either rebase Thiago's series on top of the
> dma-mapping for-next branch, or wait a few days before reposting.

I'll rebase on top of dma-mapping/for-next and do the break-up of patch
2 that you mentioned as well.

--
Thiago Jung Bauermann
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12 16:09   ` Thomas Gleixner
@ 2019-07-18 19:47     ` Thiago Jung Bauermann
  0 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-18 19:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, linux-fsdevel,
	linuxppc-dev, Christoph Hellwig


Thomas Gleixner <tglx@linutronix.de> writes:

> On Fri, 12 Jul 2019, Thiago Jung Bauermann wrote:
>> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
>> index b310a9c18113..f2e399fb626b 100644
>> --- a/include/linux/mem_encrypt.h
>> +++ b/include/linux/mem_encrypt.h
>> @@ -21,23 +21,11 @@
>>  
>>  #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
>>  
>> -#define sme_me_mask	0ULL
>> -
>> -static inline bool sme_active(void) { return false; }
>>  static inline bool sev_active(void) { return false; }
>
> You want to move out sev_active() as well; the only relevant thing is
> mem_encrypt_active(). Everything SME/SEV is an architecture detail.

As I'm sure you saw, I addressed sev_active() in a separate patch.

Thanks for reviewing this series!

>> +static inline bool mem_encrypt_active(void) { return false; }
>
> Thanks,
>
> 	tglx


-- 
Thiago Jung Bauermann
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
  2019-07-12  7:13   ` Christoph Hellwig
  2019-07-12 16:09   ` Thomas Gleixner
@ 2019-07-19  9:05   ` kbuild test robot
  2019-07-20  0:22     ` Thiago Jung Bauermann
  2 siblings, 1 reply; 23+ messages in thread
From: kbuild test robot @ 2019-07-19  9:05 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, kbuild-all, H. Peter Anvin,
	linux-fsdevel, Thomas Gleixner, linuxppc-dev, Christoph Hellwig


Hi Thiago,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.2 next-20190718]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Thiago-Jung-Bauermann/Remove-x86-specific-code-from-generic-headers/20190715-063006
config: s390-allnoconfig (attached as .config)
compiler: s390-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=s390 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/dma/swiotlb.c: In function 'swiotlb_tbl_map_single':
>> kernel/dma/swiotlb.c:461:6: error: implicit declaration of function 'mem_encrypt_active'; did you mean 'set_cpu_active'? [-Werror=implicit-function-declaration]
     if (mem_encrypt_active())
         ^~~~~~~~~~~~~~~~~~
         set_cpu_active
   cc1: some warnings being treated as errors

vim +461 kernel/dma/swiotlb.c

1b548f667c1487d lib/swiotlb.c           Jeremy Fitzhardinge   2008-12-16  442  
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  443  phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  444  				   dma_addr_t tbl_dma_addr,
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  445  				   phys_addr_t orig_addr, size_t size,
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  446  				   enum dma_data_direction dir,
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  447  				   unsigned long attrs)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  448  {
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  449  	unsigned long flags;
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  450  	phys_addr_t tlb_addr;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  451  	unsigned int nslots, stride, index, wrap;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  452  	int i;
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  453  	unsigned long mask;
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  454  	unsigned long offset_slots;
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  455  	unsigned long max_slots;
53b29c336830db4 kernel/dma/swiotlb.c    Dongli Zhang          2019-04-12  456  	unsigned long tmp_io_tlb_used;
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  457  
ac2cbab21f318e1 lib/swiotlb.c           Yinghai Lu            2013-01-24  458  	if (no_iotlb_memory)
ac2cbab21f318e1 lib/swiotlb.c           Yinghai Lu            2013-01-24  459  		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
ac2cbab21f318e1 lib/swiotlb.c           Yinghai Lu            2013-01-24  460  
d7b417fa08d1187 lib/swiotlb.c           Tom Lendacky          2017-10-20 @461  	if (mem_encrypt_active())
aa4d0dc3e029b79 kernel/dma/swiotlb.c    Thiago Jung Bauermann 2019-07-12  462  		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
648babb7078c631 lib/swiotlb.c           Tom Lendacky          2017-07-17  463  
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  464  	mask = dma_get_seg_boundary(hwdev);
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  465  
eb605a5754d050a lib/swiotlb.c           FUJITA Tomonori       2010-05-10  466  	tbl_dma_addr &= mask;
eb605a5754d050a lib/swiotlb.c           FUJITA Tomonori       2010-05-10  467  
eb605a5754d050a lib/swiotlb.c           FUJITA Tomonori       2010-05-10  468  	offset_slots = ALIGN(tbl_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
a5ddde4a558b3bd lib/swiotlb.c           Ian Campbell          2008-12-16  469  
a5ddde4a558b3bd lib/swiotlb.c           Ian Campbell          2008-12-16  470  	/*
a5ddde4a558b3bd lib/swiotlb.c           Ian Campbell          2008-12-16  471   	 * Carefully handle integer overflow which can occur when mask == ~0UL.
a5ddde4a558b3bd lib/swiotlb.c           Ian Campbell          2008-12-16  472   	 */
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  473  	max_slots = mask + 1
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  474  		    ? ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  475  		    : 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  476  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  477  	/*
602d9858f07c72e lib/swiotlb.c           Nikita Yushchenko     2017-01-11  478  	 * For mappings greater than or equal to a page, we limit the stride
602d9858f07c72e lib/swiotlb.c           Nikita Yushchenko     2017-01-11  479  	 * (and hence alignment) to a page size.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  480  	 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  481  	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
602d9858f07c72e lib/swiotlb.c           Nikita Yushchenko     2017-01-11  482  	if (size >= PAGE_SIZE)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  483  		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  484  	else
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  485  		stride = 1;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  486  
34814545890db60 lib/swiotlb.c           Eric Sesterhenn       2006-03-24  487  	BUG_ON(!nslots);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  488  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  489  	/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  490  	 * Find suitable number of IO TLB entries size that will fit this
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  491  	 * request and allocate a buffer from that IO TLB pool.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  492  	 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  493  	spin_lock_irqsave(&io_tlb_lock, flags);
60513ed06a41049 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  494  
60513ed06a41049 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  495  	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
60513ed06a41049 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  496  		goto not_found;
60513ed06a41049 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  497  
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  498  	index = ALIGN(io_tlb_index, stride);
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  499  	if (index >= io_tlb_nslabs)
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  500  		index = 0;
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  501  	wrap = index;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  502  
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  503  	do {
a8522509200b460 lib/swiotlb.c           FUJITA Tomonori       2008-04-29  504  		while (iommu_is_span_boundary(index, nslots, offset_slots,
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  505  					      max_slots)) {
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  506  			index += stride;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  507  			if (index >= io_tlb_nslabs)
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  508  				index = 0;
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  509  			if (index == wrap)
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  510  				goto not_found;
681cc5cd3efbeaf lib/swiotlb.c           FUJITA Tomonori       2008-02-04  511  		}
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  512  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  513  		/*
a7133a15587b892 lib/swiotlb.c           Andrew Morton         2008-04-29  514  		 * If we find a slot that indicates we have 'nslots' number of
a7133a15587b892 lib/swiotlb.c           Andrew Morton         2008-04-29  515  		 * contiguous buffers, we allocate the buffers from that slot
a7133a15587b892 lib/swiotlb.c           Andrew Morton         2008-04-29  516  		 * and mark the entries as '0' indicating unavailable.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  517  		 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  518  		if (io_tlb_list[index] >= nslots) {
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  519  			int count = 0;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  520  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  521  			for (i = index; i < (int) (index + nslots); i++)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  522  				io_tlb_list[i] = 0;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  523  			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  524  				io_tlb_list[i] = ++count;
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  525  			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  526  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  527  			/*
a7133a15587b892 lib/swiotlb.c           Andrew Morton         2008-04-29  528  			 * Update the indices to avoid searching in the next
a7133a15587b892 lib/swiotlb.c           Andrew Morton         2008-04-29  529  			 * round.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  530  			 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  531  			io_tlb_index = ((index + nslots) < io_tlb_nslabs
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  532  					? (index + nslots) : 0);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  533  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  534  			goto found;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  535  		}
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  536  		index += stride;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  537  		if (index >= io_tlb_nslabs)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  538  			index = 0;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  539  	} while (index != wrap);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  540  
b15a3891c916f32 lib/swiotlb.c           Jan Beulich           2008-03-13  541  not_found:
53b29c336830db4 kernel/dma/swiotlb.c    Dongli Zhang          2019-04-12  542  	tmp_io_tlb_used = io_tlb_used;
53b29c336830db4 kernel/dma/swiotlb.c    Dongli Zhang          2019-04-12  543  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  544  	spin_unlock_irqrestore(&io_tlb_lock, flags);
d0bc0c2a31c9500 lib/swiotlb.c           Christian König       2018-01-04  545  	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
53b29c336830db4 kernel/dma/swiotlb.c    Dongli Zhang          2019-04-12  546  		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
53b29c336830db4 kernel/dma/swiotlb.c    Dongli Zhang          2019-04-12  547  			 size, io_tlb_nslabs, tmp_io_tlb_used);
b907e20508d0246 kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  548  	return DMA_MAPPING_ERROR;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  549  found:
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  550  	io_tlb_used += nslots;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  551  	spin_unlock_irqrestore(&io_tlb_lock, flags);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  552  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  553  	/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  554  	 * Save away the mapping from the original address to the DMA address.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  555  	 * This is needed when we sync the memory.  Then we sync the buffer if
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  556  	 * needed.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  557  	 */
bc40ac66988a772 lib/swiotlb.c           Becky Bruce           2008-12-22  558  	for (i = 0; i < nslots; i++)
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  559  		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  560  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  561  	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
af51a9f1848ff50 lib/swiotlb.c           Alexander Duyck       2012-10-15  562  		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  563  
e05ed4d1fad9e73 lib/swiotlb.c           Alexander Duyck       2012-10-15  564  	return tlb_addr;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  565  }
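An aside for readers following the blame listing: the free-list encoding used by the allocation path above is compact enough to model in isolation. The sketch below is a toy reimplementation, not the kernel's code — the names (list, claim, init_list) and the scaled-down sizes are hypothetical stand-ins for io_tlb_list and the IO_TLB_* constants. It shows the convention that list[i] counts the free slots in the run starting at slot i, and that claiming a run zeroes its entries and renumbers the free slots below it within the same segment:

```c
#include <assert.h>

#define SEGSIZE 16                      /* toy stand-in for IO_TLB_SEGSIZE */
#define NSLABS  64                      /* toy stand-in for io_tlb_nslabs  */
#define OFFSET(i) ((i) & (SEGSIZE - 1)) /* slot offset within its segment  */

/* list[i] > 0 means a run of list[i] free slots starts at slot i. */
static int list[NSLABS];

static void init_list(void)
{
	for (int i = 0; i < NSLABS; i++)
		list[i] = SEGSIZE - OFFSET(i);
}

/*
 * Claim nslots starting at index, mirroring the two loops in the hunk
 * above: zero the claimed entries, then renumber the free slots that
 * precede them, stopping at a segment boundary or a claimed slot.
 */
static void claim(int index, int nslots)
{
	int count = 0;
	int i;

	for (i = index; i < index + nslots; i++)
		list[i] = 0;
	for (i = index - 1; OFFSET(i) != SEGSIZE - 1 && list[i]; i--)
		list[i] = ++count;
}
```

After init_list(), claim(4, 4) leaves list[4..7] zeroed and list[3..0] holding 1..4, i.e. the run starting at slot 0 now advertises only the 4 free slots before the claimed region.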
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  566  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  567  /*
d0c8ba40c6cc0fe lib/swiotlb.c           Yisheng Xie           2018-05-07  568   * tlb_addr is the physical address of the bounce buffer to unmap.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  569   */
61ca08c3220032d lib/swiotlb.c           Alexander Duyck       2012-10-15  570  void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  571  			      size_t size, enum dma_data_direction dir,
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  572  			      unsigned long attrs)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  573  {
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  574  	unsigned long flags;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  575  	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
61ca08c3220032d lib/swiotlb.c           Alexander Duyck       2012-10-15  576  	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
61ca08c3220032d lib/swiotlb.c           Alexander Duyck       2012-10-15  577  	phys_addr_t orig_addr = io_tlb_orig_addr[index];
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  578  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  579  	/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  580  	 * First, sync the memory before unmapping the entry
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  581  	 */
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  582  	if (orig_addr != INVALID_PHYS_ADDR &&
0443fa003fa199f lib/swiotlb.c           Alexander Duyck       2016-11-02  583  	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  584  	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
af51a9f1848ff50 lib/swiotlb.c           Alexander Duyck       2012-10-15  585  		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  586  
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  587  	/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  588  	 * Return the buffer to the free list by setting the corresponding
af901ca181d92aa lib/swiotlb.c           André Goddard Rosa    2009-11-14  589  	 * entries to indicate the number of contiguous entries available.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  590  	 * While returning the entries to the free list, we merge the entries
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  591  	 * with slots below and above the pool being returned.
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  592  	 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  593  	spin_lock_irqsave(&io_tlb_lock, flags);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  594  	{
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  595  		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  596  			 io_tlb_list[index + nslots] : 0);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  597  		/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  598  		 * Step 1: return the slots to the free list, merging the
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  599  		 * slots with succeeding slots
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  600  		 */
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  601  		for (i = index + nslots - 1; i >= index; i--) {
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  602  			io_tlb_list[i] = ++count;
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  603  			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  604  		}
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  605  		/*
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  606  		 * Step 2: merge the returned slots with the preceding slots,
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  607  		 * if available (non zero)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  608  		 */
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  609  		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  610  			io_tlb_list[i] = ++count;
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  611  
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  612  		io_tlb_used -= nslots;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  613  	}
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  614  	spin_unlock_irqrestore(&io_tlb_lock, flags);
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  615  }
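The matching free path can be sketched the same way. Again a toy model with hypothetical names and scaled-down sizes, not the kernel's: release() mirrors steps 1 and 2 above, seeding the count from the free run that follows the returned slots and then extending it into the free slots that precede them:

```c
#include <assert.h>

#define SEGSIZE 16                      /* toy stand-in for IO_TLB_SEGSIZE */
#define NSLABS  64                      /* toy stand-in for io_tlb_nslabs  */
#define OFFSET(i) ((i) & (SEGSIZE - 1))
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

static int list[NSLABS];

static void init_list(void)
{
	for (int i = 0; i < NSLABS; i++)
		list[i] = SEGSIZE - OFFSET(i);
}

/* Return nslots starting at index to the free list, merging with the
 * free run after it (step 1) and the free slots before it (step 2). */
static void release(int index, int nslots)
{
	/* Free slots just past the run, unless that crosses a segment. */
	int count = (index + nslots) < ALIGN_UP(index + 1, SEGSIZE)
			? list[index + nslots] : 0;
	int i;

	for (i = index + nslots - 1; i >= index; i--)
		list[i] = ++count;
	for (i = index - 1; OFFSET(i) != SEGSIZE - 1 && list[i]; i--)
		list[i] = ++count;
}

/* Round trip: pretend slots 4..7 were claimed, then release them. */
static int demo_roundtrip(void)
{
	init_list();
	list[4] = list[5] = list[6] = list[7] = 0;
	list[3] = 1; list[2] = 2; list[1] = 3; list[0] = 4;
	release(4, 4);
	return list[0];		/* merged run size at the segment start */
}
```

demo_roundtrip() returns 16: the released run merges with both neighbours and the segment is fully free again.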
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  616  
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  617  void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  618  			     size_t size, enum dma_data_direction dir,
d7ef1533a90f432 lib/swiotlb.c           Konrad Rzeszutek Wilk 2010-05-28  619  			     enum dma_sync_target target)
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  620  {
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  621  	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  622  	phys_addr_t orig_addr = io_tlb_orig_addr[index];
bc40ac66988a772 lib/swiotlb.c           Becky Bruce           2008-12-22  623  
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  624  	if (orig_addr == INVALID_PHYS_ADDR)
8e0629c1d4ce86c lib/swiotlb.c           Jan Beulich           2014-06-02  625  		return;
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  626  	orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1);
df336d1c7b6fd51 lib/swiotlb.c           Keir Fraser           2007-07-21  627  
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  628  	switch (target) {
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  629  	case SYNC_FOR_CPU:
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  630  		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
af51a9f1848ff50 lib/swiotlb.c           Alexander Duyck       2012-10-15  631  			swiotlb_bounce(orig_addr, tlb_addr,
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  632  				       size, DMA_FROM_DEVICE);
34814545890db60 lib/swiotlb.c           Eric Sesterhenn       2006-03-24  633  		else
34814545890db60 lib/swiotlb.c           Eric Sesterhenn       2006-03-24  634  			BUG_ON(dir != DMA_TO_DEVICE);
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  635  		break;
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  636  	case SYNC_FOR_DEVICE:
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  637  		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
af51a9f1848ff50 lib/swiotlb.c           Alexander Duyck       2012-10-15  638  			swiotlb_bounce(orig_addr, tlb_addr,
fbfda893eb570bb lib/swiotlb.c           Alexander Duyck       2012-10-15  639  				       size, DMA_TO_DEVICE);
34814545890db60 lib/swiotlb.c           Eric Sesterhenn       2006-03-24  640  		else
34814545890db60 lib/swiotlb.c           Eric Sesterhenn       2006-03-24  641  			BUG_ON(dir != DMA_FROM_DEVICE);
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  642  		break;
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  643  	default:
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  644  		BUG();
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  645  	}
de69e0f0b38a467 lib/swiotlb.c           John W. Linville      2005-09-29  646  }
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  647  
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  648  /*
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  649   * Create a swiotlb mapping for the buffer at @phys, and in case of DMAing
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  650   * to the device copy the data into it as well.
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  651   */
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  652  bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  653  		size_t size, enum dma_data_direction dir, unsigned long attrs)
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  654  {
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  655  	trace_swiotlb_bounced(dev, *dma_addr, size, swiotlb_force);
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  656  
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  657  	if (unlikely(swiotlb_force == SWIOTLB_NO_FORCE)) {
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  658  		dev_warn_ratelimited(dev,
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  659  			"Cannot do DMA to address %pa\n", phys);
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  660  		return false;
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  661  	}
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  662  
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  663  	/* Oh well, have to allocate and map a bounce buffer. */
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  664  	*phys = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  665  			*phys, size, dir, attrs);
b907e20508d0246 kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  666  	if (*phys == DMA_MAPPING_ERROR)
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  667  		return false;
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  668  
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  669  	/* Ensure that the address returned is DMA'ble */
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  670  	*dma_addr = __phys_to_dma(dev, *phys);
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  671  	if (unlikely(!dma_capable(dev, *dma_addr, size))) {
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  672  		swiotlb_tbl_unmap_single(dev, *phys, size, dir,
c4dae366925f929 kernel/dma/swiotlb.c    Christoph Hellwig     2018-08-20  673  			attrs | DMA_ATTR_SKIP_CPU_SYNC);
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  674  		return false;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  675  	}
309df0c503c35fb lib/swiotlb.c           Arthur Kepner         2008-04-29  676  
55897af63091ebc kernel/dma/swiotlb.c    Christoph Hellwig     2018-12-03  677  	return true;
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  678  }
^1da177e4c3f415 arch/ia64/lib/swiotlb.c Linus Torvalds        2005-04-16  679  
abe420bfae528c9 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  680  size_t swiotlb_max_mapping_size(struct device *dev)
abe420bfae528c9 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  681  {
abe420bfae528c9 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  682  	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
abe420bfae528c9 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  683  }
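For reference, with the constants in <linux/swiotlb.h> at the time of this thread (IO_TLB_SHIFT = 11, i.e. 2 KiB slots, and IO_TLB_SEGSIZE = 128 slots per segment), this works out to 256 KiB: a single bounce allocation cannot span a segment. A minimal restatement (the helper name here is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Values as found in <linux/swiotlb.h> around v5.2. */
#define IO_TLB_SHIFT   11	/* one slot covers 1 << 11 = 2048 bytes */
#define IO_TLB_SEGSIZE 128	/* slots per segment */

/* Largest contiguous bounce-buffer mapping: one full segment. */
static size_t max_mapping_size(void)
{
	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
}
```

That is 2048 * 128 = 262144 bytes = 256 KiB.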
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  684  
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  685  bool is_swiotlb_active(void)
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  686  {
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  687  	/*
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  688  	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  689  	 * address zero, io_tlb_end surely doesn't.
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  690  	 */
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  691  	return io_tlb_end != 0;
492366f7b423725 kernel/dma/swiotlb.c    Joerg Roedel          2019-02-07  692  }
45ba8d5d061b134 kernel/dma/swiotlb.c    Linus Torvalds        2019-03-10  693  
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  694  #ifdef CONFIG_DEBUG_FS
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  695  
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  696  static int __init swiotlb_create_debugfs(void)
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  697  {
1be51474f99bcfd kernel/dma/swiotlb.c    Greg Kroah-Hartman    2019-06-12  698  	struct dentry *root;
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  699  
1be51474f99bcfd kernel/dma/swiotlb.c    Greg Kroah-Hartman    2019-06-12  700  	root = debugfs_create_dir("swiotlb", NULL);
1be51474f99bcfd kernel/dma/swiotlb.c    Greg Kroah-Hartman    2019-06-12  701  	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
1be51474f99bcfd kernel/dma/swiotlb.c    Greg Kroah-Hartman    2019-06-12  702  	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  703  	return 0;
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  704  }
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  705  
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  706  late_initcall(swiotlb_create_debugfs);
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  707  
71602fe6d4e9291 kernel/dma/swiotlb.c    Dongli Zhang          2019-01-18  708  #endif

:::::: The code at line 461 was first introduced by commit
:::::: d7b417fa08d1187923c270bc33a3555c2fcff8b9 x86/mm: Add DMA support for SEV memory encryption

:::::: TO: Tom Lendacky <thomas.lendacky@amd.com>
:::::: CC: Thomas Gleixner <tglx@linutronix.de>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 5848 bytes --]


_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


* Re: [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files
  2019-07-19  9:05   ` kbuild test robot
@ 2019-07-20  0:22     ` Thiago Jung Bauermann
  0 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-20  0:22 UTC (permalink / raw)
  To: kbuild test robot
  Cc: linux-s390, Mike Anderson, Konrad Rzeszutek Wilk, Robin Murphy,
	x86, Ram Pai, linux-kernel, Alexey Dobriyan, Halil Pasic, iommu,
	Ingo Molnar, Borislav Petkov, kbuild-all, H. Peter Anvin,
	linux-fsdevel, Thomas Gleixner, linuxppc-dev, Christoph Hellwig


kbuild test robot <lkp@intel.com> writes:

> Hi Thiago,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [cannot apply to v5.2 next-20190718]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Thiago-Jung-Bauermann/Remove-x86-specific-code-from-generic-headers/20190715-063006
> config: s390-allnoconfig (attached as .config)
> compiler: s390-linux-gcc (GCC) 7.4.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         GCC_VERSION=7.4.0 make.cross ARCH=s390
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
>    kernel/dma/swiotlb.c: In function 'swiotlb_tbl_map_single':
>>> kernel/dma/swiotlb.c:461:6: error: implicit declaration of function 'mem_encrypt_active'; did you mean 'set_cpu_active'? [-Werror=implicit-function-declaration]
>      if (mem_encrypt_active())
>          ^~~~~~~~~~~~~~~~~~
>          set_cpu_active
>    cc1: some warnings being treated as errors

This error was reported for v1 of the patch series. I wasn't able to
reproduce it on v1, but found a similar issue on v2.

I have now build-tested each patch of the latest version (v3) with an
s390 cross-toolchain and the config file from this report, and found no
build issues, so I believe this problem is solved.

--
Thiago Jung Bauermann
IBM Linux Technology Center


Thread overview: 23+ messages
2019-07-12  5:36 [PATCH 0/3] Remove x86-specific code from generic headers Thiago Jung Bauermann
2019-07-12  5:36 ` [PATCH 1/3] x86/Kconfig: Move ARCH_HAS_MEM_ENCRYPT to arch/Kconfig Thiago Jung Bauermann
2019-07-12 16:04   ` Thomas Gleixner
2019-07-12 23:35     ` Thiago Jung Bauermann
2019-07-12  5:36 ` [PATCH 2/3] DMA mapping: Move SME handling to x86-specific files Thiago Jung Bauermann
2019-07-12  7:13   ` Christoph Hellwig
2019-07-12 23:42     ` Thiago Jung Bauermann
2019-07-12 16:09   ` Thomas Gleixner
2019-07-18 19:47     ` Thiago Jung Bauermann
2019-07-19  9:05   ` kbuild test robot
2019-07-20  0:22     ` Thiago Jung Bauermann
2019-07-12  5:36 ` [PATCH 3/3] fs/core/vmcore: Move sev_active() reference to x86 arch code Thiago Jung Bauermann
2019-07-12 13:09   ` Halil Pasic
2019-07-12 14:08     ` Christoph Hellwig
2019-07-12 14:51       ` Halil Pasic
2019-07-12 15:11         ` Christoph Hellwig
2019-07-12 15:42           ` Halil Pasic
2019-07-13  8:08             ` Christoph Hellwig
2019-07-12 21:55     ` Thiago Jung Bauermann
2019-07-15 14:03       ` Halil Pasic
2019-07-15 14:30         ` Christoph Hellwig
2019-07-15 15:44           ` Lendacky, Thomas
2019-07-15 20:14           ` Thiago Jung Bauermann
